Test Report: Docker_Linux_crio_arm64 21647

f5f0858587e77e8c1559a01ec4b2a40a06b76dc9:2025-10-18:41961

Failed tests (39/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.34
35 TestAddons/parallel/Registry 15.09
36 TestAddons/parallel/RegistryCreds 0.63
37 TestAddons/parallel/Ingress 145.2
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 5.42
41 TestAddons/parallel/CSI 46.6
42 TestAddons/parallel/Headlamp 3.49
43 TestAddons/parallel/CloudSpanner 6.28
44 TestAddons/parallel/LocalPath 8.78
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.29
98 TestFunctional/parallel/ServiceCmdConnect 603.57
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.15
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.09
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
129 TestFunctional/parallel/ServiceCmd/DeployApp 600.89
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
148 TestFunctional/parallel/ServiceCmd/Format 0.46
149 TestFunctional/parallel/ServiceCmd/URL 0.45
178 TestMultiControlPlane/serial/RestartCluster 392.55
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 4.14
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.72
191 TestJSONOutput/pause/Command 2.44
197 TestJSONOutput/unpause/Command 1.57
281 TestPause/serial/Pause 8.7
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.55
303 TestStartStop/group/old-k8s-version/serial/Pause 6.53
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.38
316 TestStartStop/group/no-preload/serial/Pause 6.67
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.58
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.56
332 TestStartStop/group/embed-certs/serial/Pause 7.92
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.46
342 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.38
348 TestStartStop/group/newest-cni/serial/Pause 7.25

TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable volcano --alsologtostderr -v=1: exit status 11 (336.798201ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 12:18:41.660530  842857 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:41.662343  842857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:41.662364  842857 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:41.662372  842857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:41.662715  842857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:18:41.663132  842857 mustload.go:65] Loading cluster: addons-206214
	I1018 12:18:41.663566  842857 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:41.663612  842857 addons.go:606] checking whether the cluster is paused
	I1018 12:18:41.663792  842857 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:41.663811  842857 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:18:41.664356  842857 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:18:41.682747  842857 ssh_runner.go:195] Run: systemctl --version
	I1018 12:18:41.682811  842857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:18:41.701199  842857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:18:41.807345  842857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:18:41.807431  842857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:18:41.870449  842857 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:18:41.870522  842857 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:18:41.870554  842857 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:18:41.870574  842857 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:18:41.870596  842857 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:18:41.870634  842857 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:18:41.870653  842857 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:18:41.870673  842857 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:18:41.870693  842857 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:18:41.870730  842857 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:18:41.870749  842857 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:18:41.870767  842857 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:18:41.870787  842857 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:18:41.870820  842857 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:18:41.870840  842857 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:18:41.870861  842857 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:18:41.870901  842857 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:18:41.870928  842857 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:18:41.870950  842857 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:18:41.870982  842857 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:18:41.871004  842857 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:18:41.871032  842857 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:18:41.871062  842857 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:18:41.871081  842857 cri.go:89] found id: ""
	I1018 12:18:41.871160  842857 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:41.888448  842857 out.go:203] 
	W1018 12:18:41.891566  842857 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:18:41.891694  842857 out.go:285] * 
	* 
	W1018 12:18:41.905534  842857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:18:41.908606  842857 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.34s)
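Note: the addon-disable failures in this report all exit with MK_ADDON_DISABLE_PAUSED because the "is the cluster paused" check runs "sudo runc list -f json" on the node, and on this CRI-O runner that command fails with "open /run/runc: no such file or directory". A minimal manual reproduction, assuming the addons-206214 profile from this run is still up; both commands are lifted from the trace above:

    # The container-listing step of the check (this succeeds in the log above):
    out/minikube-linux-arm64 -p addons-206214 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # The step that fails: /run/runc does not exist on this CRI-O node image,
    # so the paused-state probe errors out before any addon is actually disabled.
    out/minikube-linux-arm64 -p addons-206214 ssh "sudo runc list -f json"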

TestAddons/parallel/Registry (15.09s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.263444ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003484159s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003063695s
addons_test.go:392: (dbg) Run:  kubectl --context addons-206214 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-206214 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-206214 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.570525787s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 ip
2025/10/18 12:19:06 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable registry --alsologtostderr -v=1: exit status 11 (254.814689ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 12:19:06.131877  843328 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:06.132572  843328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:06.132586  843328 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:06.132592  843328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:06.132842  843328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:06.133162  843328 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:06.133518  843328 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:06.133545  843328 addons.go:606] checking whether the cluster is paused
	I1018 12:19:06.133651  843328 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:06.133666  843328 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:06.134099  843328 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:06.151736  843328 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:06.151812  843328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:06.169884  843328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:06.274340  843328 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:06.274439  843328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:06.304138  843328 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:06.304176  843328 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:06.304182  843328 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:06.304186  843328 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:06.304189  843328 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:06.304193  843328 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:06.304197  843328 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:06.304199  843328 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:06.304202  843328 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:06.304212  843328 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:06.304217  843328 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:06.304220  843328 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:06.304223  843328 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:06.304226  843328 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:06.304229  843328 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:06.304236  843328 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:06.304242  843328 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:06.304246  843328 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:06.304250  843328 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:06.304252  843328 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:06.304257  843328 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:06.304261  843328 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:06.304268  843328 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:06.304271  843328 cri.go:89] found id: ""
	I1018 12:19:06.304322  843328 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:06.320209  843328 out.go:203] 
	W1018 12:19:06.323267  843328 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:06.323290  843328 out.go:285] * 
	* 
	W1018 12:19:06.329608  843328 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:06.332807  843328 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.09s)

TestAddons/parallel/RegistryCreds (0.63s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.315045ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-206214
addons_test.go:332: (dbg) Run:  kubectl --context addons-206214 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (300.251693ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 12:19:44.526825  845169 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:44.527717  845169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:44.527761  845169 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:44.527784  845169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:44.528081  845169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:44.528456  845169 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:44.528871  845169 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:44.528918  845169 addons.go:606] checking whether the cluster is paused
	I1018 12:19:44.529058  845169 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:44.529088  845169 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:44.529570  845169 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:44.553636  845169 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:44.553688  845169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:44.587194  845169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:44.694267  845169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:44.694349  845169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:44.725694  845169 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:44.725713  845169 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:44.725718  845169 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:44.725722  845169 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:44.725725  845169 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:44.725729  845169 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:44.725732  845169 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:44.725735  845169 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:44.725737  845169 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:44.725747  845169 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:44.725751  845169 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:44.725754  845169 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:44.725757  845169 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:44.725760  845169 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:44.725763  845169 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:44.725768  845169 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:44.725771  845169 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:44.725775  845169 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:44.725778  845169 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:44.725781  845169 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:44.725786  845169 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:44.725789  845169 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:44.725791  845169 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:44.725794  845169 cri.go:89] found id: ""
	I1018 12:19:44.725845  845169 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:44.743740  845169 out.go:203] 
	W1018 12:19:44.747051  845169 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:44.747131  845169 out.go:285] * 
	* 
	W1018 12:19:44.753662  845169 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:44.756894  845169 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.63s)

TestAddons/parallel/Ingress (145.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-206214 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-206214 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-206214 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7407d2e3-de6f-45ae-b358-a5ce1e4bfffa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7407d2e3-de6f-45ae-b358-a5ce1e4bfffa] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003331441s
I1018 12:19:53.566893  836086 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.459980032s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-206214 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
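Note: the ssh curl above died with exit status 28, curl's timeout error code, so the request to 127.0.0.1:80 inside the node got no response from ingress-nginx within the ~2m9s the step ran. A quick manual re-check of the same path, assuming the profile is still running; the --max-time bound is added here for illustration and is not part of the original test:

    out/minikube-linux-arm64 -p addons-206214 ssh "curl -sS --max-time 30 -H 'Host: nginx.example.com' -o /dev/null -w '%{http_code}\n' http://127.0.0.1/"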
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-206214
helpers_test.go:243: (dbg) docker inspect addons-206214:

-- stdout --
	[
	    {
	        "Id": "17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f",
	        "Created": "2025-10-18T12:16:12.378611685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 837263,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:16:12.437120529Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/hostname",
	        "HostsPath": "/var/lib/docker/containers/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/hosts",
	        "LogPath": "/var/lib/docker/containers/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f-json.log",
	        "Name": "/addons-206214",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-206214:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-206214",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f",
	                "LowerDir": "/var/lib/docker/overlay2/304b17f3d40107924316cd6656eaf682fd04fd515c829c87447ad69800add7f9-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/304b17f3d40107924316cd6656eaf682fd04fd515c829c87447ad69800add7f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/304b17f3d40107924316cd6656eaf682fd04fd515c829c87447ad69800add7f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/304b17f3d40107924316cd6656eaf682fd04fd515c829c87447ad69800add7f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-206214",
	                "Source": "/var/lib/docker/volumes/addons-206214/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-206214",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-206214",
	                "name.minikube.sigs.k8s.io": "addons-206214",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f740c91afafc59e9248a84d737dbdca05e891463f6dfee035a60a805f126f8e",
	            "SandboxKey": "/var/run/docker/netns/8f740c91afaf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-206214": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:76:88:cd:71:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ff548bed0e14250dfa5ffdc0b374749a90eb9d54533761e2b63e7168112ae59",
	                    "EndpointID": "a33ad3dcd3a28fb0572474ed5a685d94d58128685a2318e4eb02dbb3280c000f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-206214",
	                        "17e1d1d7818d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-206214 -n addons-206214
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-206214 logs -n 25: (1.475949098s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-581361                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-581361 │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ start   │ --download-only -p binary-mirror-959514 --alsologtostderr --binary-mirror http://127.0.0.1:36463 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-959514   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ delete  │ -p binary-mirror-959514                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-959514   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ addons  │ enable dashboard -p addons-206214                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ addons  │ disable dashboard -p addons-206214                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ start   │ -p addons-206214 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ addons-206214 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ addons-206214 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ addons-206214 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ ip      │ addons-206214 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ addons-206214 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ ssh     │ addons-206214 ssh cat /opt/local-path-provisioner/pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ addons-206214 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-206214 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-206214                                                                                                                                                                                                                                                                                                                                                                                           │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ addons-206214 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ ssh     │ addons-206214 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ ip      │ addons-206214 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:22 UTC │ 18 Oct 25 12:22 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:15:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:15:46.605994  836859 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:15:46.606162  836859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:15:46.606192  836859 out.go:374] Setting ErrFile to fd 2...
	I1018 12:15:46.606212  836859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:15:46.606842  836859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:15:46.607404  836859 out.go:368] Setting JSON to false
	I1018 12:15:46.608382  836859 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":14299,"bootTime":1760775448,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:15:46.608579  836859 start.go:141] virtualization:  
	I1018 12:15:46.663272  836859 out.go:179] * [addons-206214] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:15:46.696404  836859 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:15:46.696431  836859 notify.go:220] Checking for updates...
	I1018 12:15:46.760112  836859 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:15:46.791470  836859 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:15:46.808056  836859 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:15:46.840862  836859 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:15:46.873029  836859 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:15:46.905440  836859 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:15:46.931464  836859 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:15:46.931602  836859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:15:47.014191  836859 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 12:15:46.994404725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:15:47.014311  836859 docker.go:318] overlay module found
	I1018 12:15:47.049561  836859 out.go:179] * Using the docker driver based on user configuration
	I1018 12:15:47.080960  836859 start.go:305] selected driver: docker
	I1018 12:15:47.080988  836859 start.go:925] validating driver "docker" against <nil>
	I1018 12:15:47.081004  836859 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:15:47.081786  836859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:15:47.140029  836859 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 12:15:47.129650432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:15:47.140184  836859 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:15:47.140406  836859 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:15:47.176747  836859 out.go:179] * Using Docker driver with root privileges
	I1018 12:15:47.223561  836859 cni.go:84] Creating CNI manager for ""
	I1018 12:15:47.223645  836859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:15:47.223665  836859 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:15:47.223770  836859 start.go:349] cluster config:
	{Name:addons-206214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1018 12:15:47.257792  836859 out.go:179] * Starting "addons-206214" primary control-plane node in "addons-206214" cluster
	I1018 12:15:47.290638  836859 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:15:47.322735  836859 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:15:47.353728  836859 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:15:47.353820  836859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:15:47.353873  836859 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 12:15:47.353889  836859 cache.go:58] Caching tarball of preloaded images
	I1018 12:15:47.353971  836859 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:15:47.353980  836859 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:15:47.354308  836859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/config.json ...
	I1018 12:15:47.354327  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/config.json: {Name:mk339f447ad27da72d7095ab6ffb314a0c496a36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:15:47.369413  836859 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:15:47.369569  836859 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 12:15:47.369589  836859 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 12:15:47.369594  836859 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 12:15:47.369601  836859 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 12:15:47.369606  836859 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 12:16:05.797936  836859 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 12:16:05.797976  836859 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:16:05.798016  836859 start.go:360] acquireMachinesLock for addons-206214: {Name:mk40010c192481362219c1375e984e4d3894f3f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:16:05.798150  836859 start.go:364] duration metric: took 109.999µs to acquireMachinesLock for "addons-206214"
	I1018 12:16:05.798181  836859 start.go:93] Provisioning new machine with config: &{Name:addons-206214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:16:05.798253  836859 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:16:05.801776  836859 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 12:16:05.802023  836859 start.go:159] libmachine.API.Create for "addons-206214" (driver="docker")
	I1018 12:16:05.802062  836859 client.go:168] LocalClient.Create starting
	I1018 12:16:05.802184  836859 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 12:16:06.078846  836859 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 12:16:06.745143  836859 cli_runner.go:164] Run: docker network inspect addons-206214 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:16:06.761128  836859 cli_runner.go:211] docker network inspect addons-206214 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:16:06.761214  836859 network_create.go:284] running [docker network inspect addons-206214] to gather additional debugging logs...
	I1018 12:16:06.761236  836859 cli_runner.go:164] Run: docker network inspect addons-206214
	W1018 12:16:06.776942  836859 cli_runner.go:211] docker network inspect addons-206214 returned with exit code 1
	I1018 12:16:06.776973  836859 network_create.go:287] error running [docker network inspect addons-206214]: docker network inspect addons-206214: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-206214 not found
	I1018 12:16:06.776988  836859 network_create.go:289] output of [docker network inspect addons-206214]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-206214 not found
	
	** /stderr **
	I1018 12:16:06.777105  836859 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:16:06.794146  836859 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c06510}
	I1018 12:16:06.794188  836859 network_create.go:124] attempt to create docker network addons-206214 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:16:06.794254  836859 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-206214 addons-206214
	I1018 12:16:06.849592  836859 network_create.go:108] docker network addons-206214 192.168.49.0/24 created
	I1018 12:16:06.849627  836859 kic.go:121] calculated static IP "192.168.49.2" for the "addons-206214" container
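	For reference, the subnet and gateway chosen above can be read back from the Docker network that was just created; a minimal sketch (the network name matches the profile name used in this run):

	    # show the IPAM configuration of the network minikube created
	    docker network inspect addons-206214 --format '{{json .IPAM.Config}}'
	    # should report the 192.168.49.0/24 subnet and 192.168.49.1 gateway selected above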
	I1018 12:16:06.849720  836859 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:16:06.865721  836859 cli_runner.go:164] Run: docker volume create addons-206214 --label name.minikube.sigs.k8s.io=addons-206214 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:16:06.884880  836859 oci.go:103] Successfully created a docker volume addons-206214
	I1018 12:16:06.884962  836859 cli_runner.go:164] Run: docker run --rm --name addons-206214-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-206214 --entrypoint /usr/bin/test -v addons-206214:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:16:07.918567  836859 cli_runner.go:217] Completed: docker run --rm --name addons-206214-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-206214 --entrypoint /usr/bin/test -v addons-206214:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (1.033566586s)
	I1018 12:16:07.918607  836859 oci.go:107] Successfully prepared a docker volume addons-206214
	I1018 12:16:07.918629  836859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:16:07.918648  836859 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:16:07.918730  836859 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-206214:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:16:12.309156  836859 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-206214:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.390386685s)
	I1018 12:16:12.309195  836859 kic.go:203] duration metric: took 4.390538966s to extract preloaded images to volume ...
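	The preload tarball is unpacked straight into the addons-206214 volume, which later becomes the node's /var. A rough way to peek at the result, assuming the preload lays container images out under lib/containers/storage (a sketch only, reusing the kicbase image referenced above):

	    # list the extracted image store inside the volume (path layout is an assumption)
	    docker run --rm --entrypoint /bin/ls \
	      -v addons-206214:/var \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 /var/lib/containers/storage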
	W1018 12:16:12.309357  836859 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:16:12.309475  836859 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:16:12.363707  836859 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-206214 --name addons-206214 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-206214 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-206214 --network addons-206214 --ip 192.168.49.2 --volume addons-206214:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:16:12.650699  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Running}}
	I1018 12:16:12.676087  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:12.696375  836859 cli_runner.go:164] Run: docker exec addons-206214 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:16:12.745421  836859 oci.go:144] the created container "addons-206214" has a running status.
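	The container publishes SSH (22), the API server (8443) and a few auxiliary ports on random loopback ports; the mapping that the SSH steps below depend on (33877 in this run) can be read back with docker port. A minimal sketch:

	    docker port addons-206214 22     # -> 127.0.0.1:33877 in this run
	    docker port addons-206214 8443   # host endpoint for the Kubernetes API server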
	I1018 12:16:12.745448  836859 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa...
	I1018 12:16:13.473630  836859 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:16:13.507100  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:13.528698  836859 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:16:13.528720  836859 kic_runner.go:114] Args: [docker exec --privileged addons-206214 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:16:13.575509  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:13.595212  836859 machine.go:93] provisionDockerMachine start ...
	I1018 12:16:13.595313  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:13.616215  836859 main.go:141] libmachine: Using SSH client type: native
	I1018 12:16:13.616535  836859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1018 12:16:13.616544  836859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:16:13.772140  836859 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-206214
	
	I1018 12:16:13.772167  836859 ubuntu.go:182] provisioning hostname "addons-206214"
	I1018 12:16:13.772234  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:13.791603  836859 main.go:141] libmachine: Using SSH client type: native
	I1018 12:16:13.792253  836859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1018 12:16:13.792269  836859 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-206214 && echo "addons-206214" | sudo tee /etc/hostname
	I1018 12:16:13.953348  836859 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-206214
	
	I1018 12:16:13.953445  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:13.970560  836859 main.go:141] libmachine: Using SSH client type: native
	I1018 12:16:13.970874  836859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1018 12:16:13.970898  836859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-206214' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-206214/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-206214' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:16:14.120024  836859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
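	Each of the provisioning commands above is executed over that published SSH port, using the generated key and the docker user. Roughly the manual equivalent, as a sketch (port taken from the 22/tcp mapping shown earlier):

	    ssh -i /home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa \
	        -p 33877 docker@127.0.0.1 hostname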
	I1018 12:16:14.120055  836859 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:16:14.120076  836859 ubuntu.go:190] setting up certificates
	I1018 12:16:14.120085  836859 provision.go:84] configureAuth start
	I1018 12:16:14.120157  836859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-206214
	I1018 12:16:14.137327  836859 provision.go:143] copyHostCerts
	I1018 12:16:14.137412  836859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:16:14.137559  836859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:16:14.137624  836859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:16:14.137680  836859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.addons-206214 san=[127.0.0.1 192.168.49.2 addons-206214 localhost minikube]
	I1018 12:16:14.630678  836859 provision.go:177] copyRemoteCerts
	I1018 12:16:14.630753  836859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:16:14.630795  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:14.648200  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:14.751396  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:16:14.769011  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:16:14.786454  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
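	The server certificate copied here was generated with the SAN list shown above (127.0.0.1, 192.168.49.2, addons-206214, localhost, minikube); if needed, that can be double-checked locally with openssl, for example:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'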
	I1018 12:16:14.803476  836859 provision.go:87] duration metric: took 683.366326ms to configureAuth
	I1018 12:16:14.803506  836859 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:16:14.803805  836859 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:16:14.803919  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:14.820973  836859 main.go:141] libmachine: Using SSH client type: native
	I1018 12:16:14.821271  836859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1018 12:16:14.821293  836859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:16:15.097143  836859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:16:15.097170  836859 machine.go:96] duration metric: took 1.501937957s to provisionDockerMachine
	I1018 12:16:15.097181  836859 client.go:171] duration metric: took 9.29510931s to LocalClient.Create
	I1018 12:16:15.097226  836859 start.go:167] duration metric: took 9.295203234s to libmachine.API.Create "addons-206214"
	I1018 12:16:15.097247  836859 start.go:293] postStartSetup for "addons-206214" (driver="docker")
	I1018 12:16:15.097259  836859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:16:15.097350  836859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:16:15.097422  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:15.117154  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:15.219990  836859 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:16:15.223395  836859 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:16:15.223425  836859 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:16:15.223436  836859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:16:15.223509  836859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:16:15.223532  836859 start.go:296] duration metric: took 126.277563ms for postStartSetup
	I1018 12:16:15.223874  836859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-206214
	I1018 12:16:15.240720  836859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/config.json ...
	I1018 12:16:15.241011  836859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:16:15.241062  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:15.258024  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:15.356845  836859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:16:15.361505  836859 start.go:128] duration metric: took 9.563234821s to createHost
	I1018 12:16:15.361527  836859 start.go:83] releasing machines lock for "addons-206214", held for 9.563364537s
	I1018 12:16:15.361598  836859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-206214
	I1018 12:16:15.378319  836859 ssh_runner.go:195] Run: cat /version.json
	I1018 12:16:15.378345  836859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:16:15.378372  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:15.378408  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:15.399804  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:15.400349  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:15.499436  836859 ssh_runner.go:195] Run: systemctl --version
	I1018 12:16:15.593950  836859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:16:15.630081  836859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:16:15.634577  836859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:16:15.634717  836859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:16:15.665383  836859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
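	The disabled bridge and podman CNI configs are only renamed with a .mk_disabled suffix, so the change is easy to confirm (and to revert) from inside the node; a sketch:

	    docker exec addons-206214 ls /etc/cni/net.d
	    # expect the configs listed above to reappear with a .mk_disabled suffix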
	I1018 12:16:15.665412  836859 start.go:495] detecting cgroup driver to use...
	I1018 12:16:15.665456  836859 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:16:15.665518  836859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:16:15.682974  836859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:16:15.695723  836859 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:16:15.695787  836859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:16:15.713016  836859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:16:15.731429  836859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:16:15.861253  836859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:16:16.031748  836859 docker.go:234] disabling docker service ...
	I1018 12:16:16.031854  836859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:16:16.061062  836859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:16:16.079489  836859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:16:16.205229  836859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:16:16.326235  836859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
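	At this point containerd has been stopped and the docker and cri-docker units stopped, disabled and masked, leaving CRI-O (restarted a few lines below) as the only runtime serving the CRI socket. A quick check from inside the node, as a sketch:

	    docker exec addons-206214 systemctl is-active docker containerd crio
	    # expected: inactive / inactive / active once crio has been restarted below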
	I1018 12:16:16.339321  836859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:16:16.353029  836859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:16:16.353098  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.361953  836859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:16:16.362066  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.371472  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.380345  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.389519  836859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:16:16.397712  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.406303  836859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.420325  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
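	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. Before the restart further down, the resulting drop-in can be sanity-checked with something like:

	    docker exec addons-206214 \
	      grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf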
	I1018 12:16:16.429454  836859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:16:16.437568  836859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:16:16.445394  836859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:16:16.568729  836859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:16:16.693310  836859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:16:16.693406  836859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:16:16.697284  836859 start.go:563] Will wait 60s for crictl version
	I1018 12:16:16.697398  836859 ssh_runner.go:195] Run: which crictl
	I1018 12:16:16.701252  836859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:16:16.725129  836859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:16:16.725303  836859 ssh_runner.go:195] Run: crio --version
	I1018 12:16:16.757209  836859 ssh_runner.go:195] Run: crio --version
	I1018 12:16:16.790620  836859 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:16:16.793427  836859 cli_runner.go:164] Run: docker network inspect addons-206214 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:16:16.810823  836859 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:16:16.814717  836859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
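	host.minikube.internal is pinned to the network gateway (192.168.49.1) in the node's /etc/hosts, which can be verified directly, for example:

	    docker exec addons-206214 grep host.minikube.internal /etc/hosts
	    # expected: 192.168.49.1	host.minikube.internal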
	I1018 12:16:16.825462  836859 kubeadm.go:883] updating cluster {Name:addons-206214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:16:16.825584  836859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:16:16.825644  836859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:16:16.863359  836859 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:16:16.863385  836859 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:16:16.863450  836859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:16:16.889876  836859 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:16:16.889899  836859 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:16:16.889907  836859 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 12:16:16.890007  836859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-206214 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:16:16.890095  836859 ssh_runner.go:195] Run: crio config
	I1018 12:16:16.955218  836859 cni.go:84] Creating CNI manager for ""
	I1018 12:16:16.955243  836859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:16:16.955264  836859 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:16:16.955300  836859 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-206214 NodeName:addons-206214 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:16:16.955445  836859 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-206214"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:16:16.955538  836859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:16:16.963730  836859 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:16:16.963802  836859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:16:16.971715  836859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:16:16.985322  836859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:16:16.998332  836859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
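	The rendered kubelet drop-in, kubelet unit and kubeadm.yaml are copied into the node here. Once the daemon-reload below has run, the effective unit can be inspected with systemctl cat, and recent kubeadm releases can also dry-check the config file; a sketch, assuming the kubeadm binary sits next to the kubelet under /var/lib/minikube/binaries/v1.34.1:

	    docker exec addons-206214 systemctl cat kubelet
	    docker exec addons-206214 /var/lib/minikube/binaries/v1.34.1/kubeadm \
	      config validate --config /var/tmp/minikube/kubeadm.yaml.new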
	I1018 12:16:17.013192  836859 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:16:17.017088  836859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:16:17.027441  836859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:16:17.136349  836859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:16:17.156230  836859 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214 for IP: 192.168.49.2
	I1018 12:16:17.156254  836859 certs.go:195] generating shared ca certs ...
	I1018 12:16:17.156279  836859 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:17.156413  836859 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:16:17.412844  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt ...
	I1018 12:16:17.412876  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt: {Name:mkc4b82375119f693df42479e770988d88209bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:17.413077  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key ...
	I1018 12:16:17.413091  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key: {Name:mk9dc014fc5eb975671220a3eb91be2810222359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:17.413181  836859 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:16:18.159062  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt ...
	I1018 12:16:18.159093  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt: {Name:mk8b05b47b979a21e25cd821712c7355198efc46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.159277  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key ...
	I1018 12:16:18.159291  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key: {Name:mk0a3c89ee9e87156cc868ddad1fe69147895d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.159378  836859 certs.go:257] generating profile certs ...
	I1018 12:16:18.159437  836859 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.key
	I1018 12:16:18.159455  836859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt with IP's: []
	I1018 12:16:18.262762  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt ...
	I1018 12:16:18.262793  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: {Name:mk254d3cde411022409e72b75879c6d383301371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.262968  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.key ...
	I1018 12:16:18.262980  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.key: {Name:mk90e5a0c595911270645a3e5cb5dff0ed83334b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.263064  836859 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key.46bde24a
	I1018 12:16:18.263084  836859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt.46bde24a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:16:18.494793  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt.46bde24a ...
	I1018 12:16:18.494827  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt.46bde24a: {Name:mk5d15bcfa121dc5f2850d18ad20cfda1c259aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.495027  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key.46bde24a ...
	I1018 12:16:18.495042  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key.46bde24a: {Name:mk31c1c9284ced2fdff5231fd7b185a244217b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.495131  836859 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt.46bde24a -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt
	I1018 12:16:18.495223  836859 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key.46bde24a -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key
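The apiserver serving certificate is first written under a hashed suffix (.46bde24a, which appears to be derived from the requested certificate options, including the SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2) and then copied to the canonical apiserver.crt/apiserver.key names. A hypothetical way to inspect the SANs in the final certificate, not taken from this log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'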
	I1018 12:16:18.495278  836859 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.key
	I1018 12:16:18.495299  836859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.crt with IP's: []
	I1018 12:16:19.666883  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.crt ...
	I1018 12:16:19.666922  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.crt: {Name:mk1d0fa8d1a3516ad11b655da77daf84f8050b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:19.667120  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.key ...
	I1018 12:16:19.667135  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.key: {Name:mk1d26d66e7dfc8e55d2952c982f31454275e90d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:19.667331  836859 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:16:19.667382  836859 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:16:19.667406  836859 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:16:19.667433  836859 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:16:19.668132  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:16:19.688296  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:16:19.707143  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:16:19.726119  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:16:19.744947  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:16:19.763365  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:16:19.781952  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:16:19.800361  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:16:19.818372  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:16:19.836972  836859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:16:19.850754  836859 ssh_runner.go:195] Run: openssl version
	I1018 12:16:19.857387  836859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:16:19.866328  836859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:16:19.870082  836859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:16:19.870152  836859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:16:19.913955  836859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
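The trust-store setup above follows OpenSSL's subject-hash convention: the CA is linked into /etc/ssl/certs under its own name, `openssl x509 -hash` prints the subject hash, and a <hash>.0 symlink is added so libraries that look CAs up by hash can find it. The same three steps written out, using the hash value reported in this run:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0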
	I1018 12:16:19.922851  836859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:16:19.926612  836859 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:16:19.926663  836859 kubeadm.go:400] StartCluster: {Name:addons-206214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:16:19.926748  836859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:16:19.926814  836859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:16:19.957952  836859 cri.go:89] found id: ""
	I1018 12:16:19.958083  836859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:16:19.966161  836859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:16:19.974022  836859 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:16:19.974127  836859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:16:19.982173  836859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:16:19.982194  836859 kubeadm.go:157] found existing configuration files:
	
	I1018 12:16:19.982247  836859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:16:19.990159  836859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:16:19.990283  836859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:16:19.997760  836859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:16:20.015450  836859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:16:20.015582  836859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:16:20.024719  836859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:16:20.034056  836859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:16:20.034127  836859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:16:20.042848  836859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:16:20.051375  836859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:16:20.051452  836859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:16:20.061563  836859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:16:20.103508  836859 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:16:20.103793  836859 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:16:20.143148  836859 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:16:20.143228  836859 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:16:20.143269  836859 kubeadm.go:318] OS: Linux
	I1018 12:16:20.143322  836859 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:16:20.143376  836859 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:16:20.143429  836859 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:16:20.143494  836859 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:16:20.143549  836859 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:16:20.143604  836859 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:16:20.143679  836859 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:16:20.143735  836859 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:16:20.143791  836859 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:16:20.216958  836859 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:16:20.217077  836859 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:16:20.217177  836859 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:16:20.226117  836859 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:16:20.232953  836859 out.go:252]   - Generating certificates and keys ...
	I1018 12:16:20.233068  836859 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:16:20.233149  836859 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:16:20.702595  836859 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:16:21.006042  836859 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:16:21.532208  836859 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:16:21.791820  836859 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:16:22.557644  836859 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:16:22.557794  836859 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-206214 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:16:22.897996  836859 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:16:22.898430  836859 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-206214 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:16:23.023531  836859 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:16:23.471833  836859 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:16:23.591234  836859 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:16:23.591629  836859 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:16:23.659827  836859 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:16:24.134316  836859 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:16:25.555504  836859 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:16:26.058709  836859 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:16:26.866925  836859 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:16:26.867730  836859 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:16:26.871057  836859 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:16:26.874419  836859 out.go:252]   - Booting up control plane ...
	I1018 12:16:26.874521  836859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:16:26.874603  836859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:16:26.875771  836859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:16:26.909645  836859 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:16:26.909922  836859 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:16:26.917819  836859 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:16:26.918093  836859 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:16:26.918282  836859 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:16:27.062107  836859 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:16:27.062237  836859 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:16:29.063901  836859 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001823146s
	I1018 12:16:29.067468  836859 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:16:29.067568  836859 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:16:29.067974  836859 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:16:29.068066  836859 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:16:35.064462  836859 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.996746998s
	I1018 12:16:35.351942  836859 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.283606895s
	I1018 12:16:36.069984  836859 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002230901s
	I1018 12:16:36.090675  836859 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:16:36.108793  836859 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:16:36.126872  836859 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:16:36.127127  836859 kubeadm.go:318] [mark-control-plane] Marking the node addons-206214 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:16:36.139768  836859 kubeadm.go:318] [bootstrap-token] Using token: khshsh.o8s9b5n83lhecxu7
	I1018 12:16:36.142911  836859 out.go:252]   - Configuring RBAC rules ...
	I1018 12:16:36.143042  836859 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:16:36.147439  836859 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:16:36.155525  836859 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:16:36.161846  836859 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:16:36.166413  836859 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:16:36.170771  836859 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:16:36.483846  836859 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:16:36.910200  836859 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:16:37.477268  836859 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:16:37.478716  836859 kubeadm.go:318] 
	I1018 12:16:37.478797  836859 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:16:37.478803  836859 kubeadm.go:318] 
	I1018 12:16:37.478883  836859 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:16:37.478888  836859 kubeadm.go:318] 
	I1018 12:16:37.478925  836859 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:16:37.479485  836859 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:16:37.479545  836859 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:16:37.479551  836859 kubeadm.go:318] 
	I1018 12:16:37.479607  836859 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:16:37.479612  836859 kubeadm.go:318] 
	I1018 12:16:37.479684  836859 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:16:37.479690  836859 kubeadm.go:318] 
	I1018 12:16:37.479744  836859 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:16:37.479821  836859 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:16:37.479892  836859 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:16:37.479896  836859 kubeadm.go:318] 
	I1018 12:16:37.479983  836859 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:16:37.480065  836859 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:16:37.480070  836859 kubeadm.go:318] 
	I1018 12:16:37.480157  836859 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token khshsh.o8s9b5n83lhecxu7 \
	I1018 12:16:37.480264  836859 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e \
	I1018 12:16:37.480285  836859 kubeadm.go:318] 	--control-plane 
	I1018 12:16:37.480290  836859 kubeadm.go:318] 
	I1018 12:16:37.480378  836859 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:16:37.480383  836859 kubeadm.go:318] 
	I1018 12:16:37.480468  836859 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token khshsh.o8s9b5n83lhecxu7 \
	I1018 12:16:37.480574  836859 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e 
	I1018 12:16:37.482970  836859 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:16:37.483201  836859 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:16:37.483309  836859 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 12:16:37.483326  836859 cni.go:84] Creating CNI manager for ""
	I1018 12:16:37.483335  836859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:16:37.488466  836859 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:16:37.491409  836859 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:16:37.495470  836859 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:16:37.495491  836859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:16:37.508209  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:16:37.795180  836859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:16:37.795326  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:37.795397  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-206214 minikube.k8s.io/updated_at=2025_10_18T12_16_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-206214 minikube.k8s.io/primary=true
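Two bootstrap kubectl calls are issued as soon as the control plane answers: a ClusterRoleBinding (minikube-rbac) that grants cluster-admin to the kube-system default ServiceAccount, and a label pass that stamps the node with minikube.k8s.io/* metadata. A hypothetical verification of both, not taken from this log, using the same in-node kubectl and kubeconfig:

	K=/var/lib/minikube/binaries/v1.34.1/kubectl
	sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
	sudo $K --kubeconfig=/var/lib/minikube/kubeconfig get node addons-206214 --show-labels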
	I1018 12:16:38.026370  836859 ops.go:34] apiserver oom_adj: -16
	I1018 12:16:38.026483  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:38.526604  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:39.026647  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:39.526601  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:40.028102  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:40.527002  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:41.027368  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:41.526890  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:41.630380  836859 kubeadm.go:1113] duration metric: took 3.835110806s to wait for elevateKubeSystemPrivileges
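The burst of `kubectl get sa default` calls above is a poll loop: minikube retries roughly every 500ms until the default ServiceAccount exists (kubeadm creates it asynchronously), and the elapsed time is what gets reported as the elevateKubeSystemPrivileges duration (about 3.8s here). A minimal sketch of an equivalent wait, assuming the same binary path and kubeconfig:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done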
	I1018 12:16:41.630420  836859 kubeadm.go:402] duration metric: took 21.703762784s to StartCluster
	I1018 12:16:41.630449  836859 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:41.630575  836859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:16:41.630982  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:41.631187  836859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:16:41.631334  836859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:16:41.631590  836859 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:16:41.631598  836859 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 12:16:41.631734  836859 addons.go:69] Setting yakd=true in profile "addons-206214"
	I1018 12:16:41.631749  836859 addons.go:238] Setting addon yakd=true in "addons-206214"
	I1018 12:16:41.631748  836859 addons.go:69] Setting inspektor-gadget=true in profile "addons-206214"
	I1018 12:16:41.631762  836859 addons.go:238] Setting addon inspektor-gadget=true in "addons-206214"
	I1018 12:16:41.631775  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.631782  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.632258  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.632297  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.632801  836859 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-206214"
	I1018 12:16:41.632821  836859 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-206214"
	I1018 12:16:41.632855  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.633264  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.636388  836859 addons.go:69] Setting cloud-spanner=true in profile "addons-206214"
	I1018 12:16:41.636432  836859 addons.go:238] Setting addon cloud-spanner=true in "addons-206214"
	I1018 12:16:41.636467  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.636922  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.637646  836859 addons.go:69] Setting metrics-server=true in profile "addons-206214"
	I1018 12:16:41.637710  836859 addons.go:238] Setting addon metrics-server=true in "addons-206214"
	I1018 12:16:41.637753  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.638283  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.640544  836859 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-206214"
	I1018 12:16:41.640610  836859 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-206214"
	I1018 12:16:41.640645  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.641096  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.647944  836859 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-206214"
	I1018 12:16:41.647980  836859 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-206214"
	I1018 12:16:41.648033  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.648498  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.653826  836859 addons.go:69] Setting default-storageclass=true in profile "addons-206214"
	I1018 12:16:41.653860  836859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-206214"
	I1018 12:16:41.654262  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.663080  836859 addons.go:69] Setting registry=true in profile "addons-206214"
	I1018 12:16:41.663109  836859 addons.go:238] Setting addon registry=true in "addons-206214"
	I1018 12:16:41.663154  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.663627  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.675680  836859 addons.go:69] Setting registry-creds=true in profile "addons-206214"
	I1018 12:16:41.675774  836859 addons.go:238] Setting addon registry-creds=true in "addons-206214"
	I1018 12:16:41.675846  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.677232  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.692133  836859 addons.go:69] Setting gcp-auth=true in profile "addons-206214"
	I1018 12:16:41.692226  836859 mustload.go:65] Loading cluster: addons-206214
	I1018 12:16:41.695328  836859 addons.go:69] Setting ingress=true in profile "addons-206214"
	I1018 12:16:41.695357  836859 addons.go:238] Setting addon ingress=true in "addons-206214"
	I1018 12:16:41.695401  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.695907  836859 addons.go:69] Setting storage-provisioner=true in profile "addons-206214"
	I1018 12:16:41.695926  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.696078  836859 addons.go:238] Setting addon storage-provisioner=true in "addons-206214"
	I1018 12:16:41.696127  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.696567  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.714712  836859 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-206214"
	I1018 12:16:41.714756  836859 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-206214"
	I1018 12:16:41.715112  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.719272  836859 addons.go:69] Setting ingress-dns=true in profile "addons-206214"
	I1018 12:16:41.719308  836859 addons.go:238] Setting addon ingress-dns=true in "addons-206214"
	I1018 12:16:41.719351  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.719827  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.734936  836859 addons.go:69] Setting volcano=true in profile "addons-206214"
	I1018 12:16:41.734975  836859 addons.go:238] Setting addon volcano=true in "addons-206214"
	I1018 12:16:41.735090  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.735562  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.744200  836859 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 12:16:41.747083  836859 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 12:16:41.747112  836859 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 12:16:41.747187  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.755227  836859 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 12:16:41.755537  836859 addons.go:69] Setting volumesnapshots=true in profile "addons-206214"
	I1018 12:16:41.755558  836859 addons.go:238] Setting addon volumesnapshots=true in "addons-206214"
	I1018 12:16:41.755600  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.756081  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.758318  836859 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:16:41.758342  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 12:16:41.758515  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.780471  836859 out.go:179] * Verifying Kubernetes components...
	I1018 12:16:41.783603  836859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:16:41.783980  836859 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 12:16:41.787942  836859 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:16:41.788231  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.790016  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 12:16:41.790037  836859 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 12:16:41.790125  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.811017  836859 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 12:16:41.830127  836859 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:16:41.836037  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 12:16:41.836230  836859 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 12:16:41.836236  836859 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 12:16:41.837610  836859 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:16:41.857430  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 12:16:41.857512  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.867412  836859 addons.go:238] Setting addon default-storageclass=true in "addons-206214"
	I1018 12:16:41.867454  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.872186  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.888858  836859 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:16:41.888880  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:16:41.888942  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.895535  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:41.896830  836859 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 12:16:41.896865  836859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 12:16:41.896986  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.932120  836859 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 12:16:41.935076  836859 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 12:16:41.935102  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 12:16:41.935172  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.938191  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 12:16:41.944306  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 12:16:41.950810  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 12:16:41.955071  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 12:16:41.956551  836859 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 12:16:41.985832  836859 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 12:16:41.989397  836859 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-206214"
	I1018 12:16:41.989441  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.989839  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.999167  836859 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 12:16:42.005037  836859 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:16:42.005061  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 12:16:42.005138  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	W1018 12:16:42.010733  836859 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 12:16:42.011318  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.015109  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.017001  836859 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:16:42.018157  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 12:16:42.018394  836859 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 12:16:42.044433  836859 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:16:42.047526  836859 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:16:42.047551  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 12:16:42.047619  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.056096  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 12:16:42.062640  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 12:16:42.062729  836859 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 12:16:42.062845  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.078273  836859 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:16:42.078298  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 12:16:42.078368  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.103512  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 12:16:42.108320  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:42.112958  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 12:16:42.115776  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 12:16:42.115808  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 12:16:42.115889  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.120471  836859 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:16:42.120495  836859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:16:42.120570  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.135955  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.137027  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.137877  836859 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 12:16:42.137895  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 12:16:42.137984  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.167949  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.169606  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.247961  836859 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 12:16:42.250917  836859 out.go:179]   - Using image docker.io/busybox:stable
	I1018 12:16:42.253853  836859 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:16:42.253876  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 12:16:42.253951  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.261078  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.288727  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.289680  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.293085  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.319977  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.320020  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.320692  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	W1018 12:16:42.323930  836859 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:16:42.323968  836859 retry.go:31] will retry after 209.516068ms: ssh: handshake failed: EOF
	I1018 12:16:42.347291  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.484885  836859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:16:42.485179  836859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
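The pipeline above patches the live coredns ConfigMap: it dumps the Corefile, uses sed to insert a hosts block just before the forward directive and a log directive before errors, then pushes the result back with kubectl replace. The injected fragment maps host.minikube.internal to 192.168.49.1, the address the cluster uses to reach the host:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}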
	I1018 12:16:42.732110  836859 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:42.732188  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 12:16:42.822657  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:16:43.001561  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 12:16:43.001653  836859 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 12:16:43.018453  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:43.021229  836859 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 12:16:43.021304  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 12:16:43.136083  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:16:43.203538  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 12:16:43.203618  836859 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 12:16:43.226238  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 12:16:43.226314  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 12:16:43.226842  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 12:16:43.270726  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:16:43.274133  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:16:43.293991  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:16:43.313183  836859 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 12:16:43.313260  836859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 12:16:43.373168  836859 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 12:16:43.373251  836859 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 12:16:43.374283  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 12:16:43.374352  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 12:16:43.419239  836859 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:16:43.419321  836859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 12:16:43.468322  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:16:43.468471  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 12:16:43.468503  836859 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 12:16:43.472311  836859 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 12:16:43.472390  836859 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 12:16:43.492181  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:16:43.504445  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:16:43.547487  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 12:16:43.547570  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 12:16:43.591322  836859 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:16:43.591403  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 12:16:43.649945  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:16:43.676539  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:16:43.676618  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 12:16:43.695399  836859 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 12:16:43.695481  836859 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 12:16:43.801374  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:16:43.804892  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 12:16:43.804967  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 12:16:43.929143  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:16:43.949889  836859 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 12:16:43.949981  836859 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 12:16:44.108331  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 12:16:44.108413  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 12:16:44.288955  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 12:16:44.289032  836859 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 12:16:44.345598  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 12:16:44.345671  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 12:16:44.567033  836859 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:16:44.567054  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 12:16:44.644853  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 12:16:44.644876  836859 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 12:16:44.662069  836859 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.176839614s)
	I1018 12:16:44.662096  836859 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
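The sed pipeline that just completed edits the coredns ConfigMap in place rather than going through an addon manifest. Reconstructed from the sed expression itself (not dumped from the running cluster), the stanza it injects into the Corefile is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

The block is inserted immediately above the "forward . /etc/resolv.conf" plugin line, and a bare "log" directive is added before "errors", so in-cluster queries for host.minikube.internal resolve to the Docker network gateway instead of being forwarded to the host resolver.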
	I1018 12:16:44.663073  836859 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.178112579s)
	I1018 12:16:44.664002  836859 node_ready.go:35] waiting up to 6m0s for node "addons-206214" to be "Ready" ...
	I1018 12:16:44.664224  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.841540249s)
	I1018 12:16:44.809758  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:16:44.898048  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 12:16:44.898125  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 12:16:45.195103  836859 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-206214" context rescaled to 1 replicas
	I1018 12:16:45.252656  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 12:16:45.252745  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 12:16:45.431798  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:16:45.431888  836859 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 12:16:45.579903  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1018 12:16:46.733897  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:47.534082  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.515537899s)
	W1018 12:16:47.534114  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:47.534134  836859 retry.go:31] will retry after 154.885209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
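The failure being retried here is kubectl's client-side validation, not an API server error: at least one YAML document in ig-crd.yaml reaches kubectl without apiVersion and kind set, which is why the message reads "apiVersion not set, kind not set" even though every object in ig-deployment.yaml applies cleanly. The --force flag used on the next attempt does not skip validation (only --validate=false would), which is why the same error repeats below. For reference, every object in a CRD manifest needs a type header of this shape; the snippet is illustrative only and the metadata.name is hypothetical, since the real CRD names never appear in this log:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.mygroup.example.com   # hypothetical name, for illustration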
	I1018 12:16:47.534187  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.398031205s)
	I1018 12:16:47.534228  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.307334672s)
	I1018 12:16:47.689389  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:48.642834  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.348758503s)
	I1018 12:16:48.642952  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.174566557s)
	I1018 12:16:48.643232  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.150972102s)
	I1018 12:16:48.643325  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.138805838s)
	I1018 12:16:48.643524  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.99349581s)
	I1018 12:16:48.643563  836859 addons.go:479] Verifying addon metrics-server=true in "addons-206214"
	I1018 12:16:48.643624  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.842175951s)
	I1018 12:16:48.643671  836859 addons.go:479] Verifying addon registry=true in "addons-206214"
	I1018 12:16:48.643897  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.373103996s)
	I1018 12:16:48.643977  836859 addons.go:479] Verifying addon ingress=true in "addons-206214"
	I1018 12:16:48.644081  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.714869978s)
	I1018 12:16:48.643939  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.369730972s)
	I1018 12:16:48.646926  836859 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-206214 service yakd-dashboard -n yakd-dashboard
	
	I1018 12:16:48.647036  836859 out.go:179] * Verifying registry addon...
	I1018 12:16:48.647086  836859 out.go:179] * Verifying ingress addon...
	I1018 12:16:48.651851  836859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 12:16:48.652848  836859 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 12:16:48.663609  836859 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:16:48.663715  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:48.664366  836859 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 12:16:48.664420  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
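The repeated kapi.go "waiting for pod" lines around here are a poll loop: list pods matching a label selector, report the current phase, and retry until every match is Running. Below is a minimal client-go sketch of that pattern; the namespace and selector are the ones shown in the log, while the timeout, poll interval and the rest of the code are illustrative and not minikube's actual implementation.

    // Sketch only: poll until all pods matching a label selector are Running,
    // mirroring the "waiting for pod ... current state: Pending" lines above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                ready := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        ready = false // still Pending, as in the log lines above
                        break
                    }
                }
                if ready {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // illustrative poll interval
        }
        return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute))
    }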
	W1018 12:16:48.677686  836859 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1018 12:16:48.694639  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.884788351s)
	W1018 12:16:48.694681  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:16:48.694703  836859 retry.go:31] will retry after 311.198419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
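This retry is an ordering problem rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the volumesnapshotclasses CRD, and the API server is not yet serving the new kind, hence "no matches for kind ... ensure CRDs are installed first". The --force retry a few lines below completes without another failure once the CRDs are registered. An alternative pattern is to apply the CRDs first and poll discovery until the group/version actually serves the resource before applying the custom resources; the sketch below is illustrative only and is not what addons.go does.

    // Sketch only: wait until snapshot.storage.k8s.io/v1 serves
    // volumesnapshotclasses before applying VolumeSnapshotClass objects.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForResource(cs kubernetes.Interface, groupVersion, resource string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if list, err := cs.Discovery().ServerResourcesForGroupVersion(groupVersion); err == nil {
                for _, r := range list.APIResources {
                    if r.Name == resource {
                        return nil // CRD registered and served; its CRs will now validate
                    }
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s %s not served within %v", groupVersion, resource, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitForResource(cs, "snapshot.storage.k8s.io/v1", "volumesnapshotclasses", 2*time.Minute))
    }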
	I1018 12:16:49.006836  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:16:49.087542  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.507525624s)
	I1018 12:16:49.087710  836859 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-206214"
	I1018 12:16:49.092826  836859 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 12:16:49.096565  836859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 12:16:49.101965  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.412496127s)
	W1018 12:16:49.102050  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:49.102087  836859 retry.go:31] will retry after 392.224234ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:49.118678  836859 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:16:49.118748  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 12:16:49.167236  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:49.219195  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:49.219534  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:49.495279  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:49.601157  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:49.655363  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:49.657998  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:49.718048  836859 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 12:16:49.718168  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:49.740929  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:49.873939  836859 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 12:16:49.893156  836859 addons.go:238] Setting addon gcp-auth=true in "addons-206214"
	I1018 12:16:49.893203  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:49.893655  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:49.913669  836859 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 12:16:49.913742  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:49.933620  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:50.101833  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:50.156884  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:50.157351  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:50.599814  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:50.654900  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:50.655624  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:51.099983  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:51.156227  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:51.156372  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:51.167591  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:51.601525  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:51.657186  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:51.657461  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:51.819923  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.324606785s)
	W1018 12:16:51.819964  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:51.819983  836859 retry.go:31] will retry after 704.903605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:51.820038  836859 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.906346878s)
	I1018 12:16:51.820206  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.812914467s)
	I1018 12:16:51.823195  836859 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:16:51.826151  836859 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 12:16:51.829091  836859 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 12:16:51.829125  836859 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 12:16:51.843217  836859 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 12:16:51.843297  836859 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 12:16:51.863007  836859 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:16:51.863038  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 12:16:51.880455  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:16:52.100794  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:52.157214  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:52.157658  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:52.404648  836859 addons.go:479] Verifying addon gcp-auth=true in "addons-206214"
	I1018 12:16:52.407882  836859 out.go:179] * Verifying gcp-auth addon...
	I1018 12:16:52.412804  836859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 12:16:52.421132  836859 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 12:16:52.421157  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:52.525545  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:52.602922  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:52.657184  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:52.657772  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:52.916464  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:53.100785  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:53.157744  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:53.158178  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:53.168834  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	W1018 12:16:53.343845  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:53.343882  836859 retry.go:31] will retry after 960.020876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:53.415786  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:53.599729  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:53.655750  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:53.656042  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:53.916188  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:54.100185  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:54.155522  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:54.156826  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:54.304838  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:54.416281  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:54.600857  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:54.656940  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:54.657418  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:54.916567  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:55.100704  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 12:16:55.123902  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:55.123934  836859 retry.go:31] will retry after 1.824477957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:55.156037  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:55.156369  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:55.416465  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:55.600265  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:55.654967  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:55.656209  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:55.668726  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:55.916077  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:56.099937  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:56.155007  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:56.156304  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:56.415775  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:56.600046  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:56.655290  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:56.656356  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:56.915754  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:56.948803  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:57.100807  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:57.155851  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:57.157859  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:57.416608  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:57.601155  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:57.656844  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:57.658160  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:57.761048  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:57.761081  836859 retry.go:31] will retry after 2.20875503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:57.916316  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:58.100492  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:58.155843  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:58.157464  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:58.167580  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:58.416784  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:58.599721  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:58.655765  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:58.656008  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:58.916788  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:59.099513  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:59.155752  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:59.156168  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:59.417000  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:59.599632  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:59.655894  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:59.656072  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:59.917019  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:59.970334  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:00.101371  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:00.164886  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:00.165651  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:00.171158  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:00.418491  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:00.601608  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:00.657424  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:00.658076  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:00.916566  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:01.046747  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.076365369s)
	W1018 12:17:01.046785  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:01.046805  836859 retry.go:31] will retry after 3.668859693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
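The successive retry delays logged for this ig-crd apply (154ms, 392ms, 704ms, 960ms, 1.8s, 2.2s, 3.7s, 5.2s) come from minikube's retry helper and grow roughly geometrically. A generic sketch of that backoff pattern is below; it is not retry.go itself, and the attempt count, starting delay and doubling factor are illustrative.

    // Sketch only: re-run fn with growing delays until it succeeds or
    // attempts are exhausted, as the "will retry after ..." lines above do.
    package main

    import (
        "fmt"
        "time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // roughly double the wait between attempts
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 150*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("transient failure %d", calls)
            }
            return nil
        })
        fmt.Println("result:", err)
    }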
	I1018 12:17:01.100249  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:01.157095  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:01.157212  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:01.416612  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:01.599895  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:01.656777  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:01.658044  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:01.917903  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:02.100232  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:02.154949  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:02.156835  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:02.416242  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:02.600557  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:02.656591  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:02.656895  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:02.667070  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:02.916376  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:03.100970  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:03.156575  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:03.156863  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:03.416093  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:03.601341  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:03.701671  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:03.702720  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:03.917501  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:04.100924  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:04.155229  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:04.155741  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:04.416101  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:04.600528  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:04.655480  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:04.656557  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:04.667724  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:04.715866  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:04.916579  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:05.099810  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:05.157767  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:05.158649  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:05.416268  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:17:05.600720  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:05.600814  836859 retry.go:31] will retry after 5.24493786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:05.605972  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:05.657115  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:05.658716  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:05.917033  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:06.100548  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:06.155624  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:06.157075  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:06.416664  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:06.600139  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:06.656587  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:06.656772  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:06.667830  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:06.916313  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:07.100201  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:07.155991  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:07.156081  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:07.416927  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:07.600481  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:07.655694  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:07.656780  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:07.917175  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:08.100504  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:08.156768  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:08.157019  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:08.415729  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:08.599697  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:08.656121  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:08.656320  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:08.916216  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:09.100591  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:09.155407  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:09.156061  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:09.167855  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:09.416064  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:09.600071  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:09.655982  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:09.656039  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:09.916944  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:10.099996  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:10.154785  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:10.155997  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:10.415811  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:10.599723  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:10.655566  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:10.656065  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:10.846462  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:10.916789  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:11.100593  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:11.157635  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:11.157978  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:11.428365  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:11.600090  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:11.659392  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:11.663260  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:11.668212  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	W1018 12:17:11.704373  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:11.704439  836859 retry.go:31] will retry after 3.739788043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:11.916752  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:12.100437  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:12.155866  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:12.156589  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:12.415672  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:12.599930  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:12.656492  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:12.656934  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:12.916411  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:13.100323  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:13.155424  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:13.156297  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:13.416527  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:13.600746  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:13.654664  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:13.655784  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:13.916610  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:14.100557  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:14.155248  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:14.156485  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:14.167008  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:14.416483  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:14.600416  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:14.655001  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:14.656655  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:14.916171  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:15.100450  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:15.155459  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:15.156267  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:15.416376  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:15.445439  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:15.599986  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:15.657325  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:15.658000  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:15.916976  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:16.099935  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:16.156060  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:16.156174  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:16.167174  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	W1018 12:17:16.265258  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:16.265289  836859 retry.go:31] will retry after 14.389338895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:16.416417  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:16.599396  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:16.655949  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:16.656042  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:16.917084  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:17.100223  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:17.155465  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:17.156641  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:17.416049  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:17.600487  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:17.655406  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:17.655963  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:17.917469  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:18.100515  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:18.155434  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:18.156784  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:18.167782  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:18.415771  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:18.600082  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:18.655900  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:18.656238  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:18.915868  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:19.100014  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:19.154830  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:19.156344  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:19.415910  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:19.600125  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:19.655069  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:19.656711  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:19.916717  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:20.100060  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:20.154726  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:20.155873  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:20.167825  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:20.415886  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:20.599840  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:20.654630  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:20.655769  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:20.916389  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:21.100989  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:21.154947  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:21.155774  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:21.415924  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:21.599822  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:21.654699  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:21.656014  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:21.916377  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:22.100669  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:22.156257  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:22.156451  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:22.417034  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:22.599875  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:22.655114  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:22.656029  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:22.666868  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:22.916318  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:23.100427  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:23.155596  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:23.156388  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:23.416573  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:23.599749  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:23.655761  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:23.655912  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:23.920032  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:24.120192  836859 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:17:24.120273  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:24.228046  836859 node_ready.go:49] node "addons-206214" is "Ready"
	I1018 12:17:24.228126  836859 node_ready.go:38] duration metric: took 39.564084165s for node "addons-206214" to be "Ready" ...
	I1018 12:17:24.228154  836859 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:24.228239  836859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:24.252961  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:24.253506  836859 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:17:24.253554  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:24.255982  836859 api_server.go:72] duration metric: took 42.624761246s to wait for apiserver process to appear ...
	I1018 12:17:24.256041  836859 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:24.256075  836859 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:17:24.275735  836859 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:17:24.278615  836859 api_server.go:141] control plane version: v1.34.1
	I1018 12:17:24.278638  836859 api_server.go:131] duration metric: took 22.576602ms to wait for apiserver health ...
	I1018 12:17:24.278647  836859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:24.288066  836859 system_pods.go:59] 19 kube-system pods found
	I1018 12:17:24.288151  836859 system_pods.go:61] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:24.288177  836859 system_pods.go:61] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:24.288216  836859 system_pods.go:61] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending
	I1018 12:17:24.288242  836859 system_pods.go:61] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending
	I1018 12:17:24.288262  836859 system_pods.go:61] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:24.288283  836859 system_pods.go:61] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:24.288304  836859 system_pods.go:61] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:24.288339  836859 system_pods.go:61] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:24.288362  836859 system_pods.go:61] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending
	I1018 12:17:24.288383  836859 system_pods.go:61] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:24.288414  836859 system_pods.go:61] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:24.288440  836859 system_pods.go:61] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending
	I1018 12:17:24.288459  836859 system_pods.go:61] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending
	I1018 12:17:24.288482  836859 system_pods.go:61] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:24.288517  836859 system_pods.go:61] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:24.288540  836859 system_pods.go:61] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending
	I1018 12:17:24.288558  836859 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending
	I1018 12:17:24.288578  836859 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending
	I1018 12:17:24.288602  836859 system_pods.go:61] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:24.288631  836859 system_pods.go:74] duration metric: took 9.977156ms to wait for pod list to return data ...
	I1018 12:17:24.288658  836859 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:24.296395  836859 default_sa.go:45] found service account: "default"
	I1018 12:17:24.296466  836859 default_sa.go:55] duration metric: took 7.786292ms for default service account to be created ...
	I1018 12:17:24.296490  836859 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:24.317761  836859 system_pods.go:86] 19 kube-system pods found
	I1018 12:17:24.317843  836859 system_pods.go:89] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:24.317871  836859 system_pods.go:89] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:24.317914  836859 system_pods.go:89] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending
	I1018 12:17:24.317942  836859 system_pods.go:89] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending
	I1018 12:17:24.317962  836859 system_pods.go:89] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:24.317983  836859 system_pods.go:89] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:24.318016  836859 system_pods.go:89] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:24.318039  836859 system_pods.go:89] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:24.318057  836859 system_pods.go:89] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending
	I1018 12:17:24.318076  836859 system_pods.go:89] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:24.318097  836859 system_pods.go:89] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:24.318127  836859 system_pods.go:89] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending
	I1018 12:17:24.318155  836859 system_pods.go:89] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:17:24.318179  836859 system_pods.go:89] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:24.318203  836859 system_pods.go:89] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:24.318235  836859 system_pods.go:89] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending
	I1018 12:17:24.318262  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending
	I1018 12:17:24.318283  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending
	I1018 12:17:24.318306  836859 system_pods.go:89] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:24.318352  836859 retry.go:31] will retry after 268.188257ms: missing components: kube-dns
	I1018 12:17:24.428441  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:24.592755  836859 system_pods.go:86] 19 kube-system pods found
	I1018 12:17:24.592852  836859 system_pods.go:89] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:24.592878  836859 system_pods.go:89] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:24.592919  836859 system_pods.go:89] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:17:24.592954  836859 system_pods.go:89] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:17:24.592975  836859 system_pods.go:89] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:24.592997  836859 system_pods.go:89] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:24.593027  836859 system_pods.go:89] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:24.593049  836859 system_pods.go:89] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:24.593069  836859 system_pods.go:89] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:17:24.593089  836859 system_pods.go:89] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:24.593109  836859 system_pods.go:89] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:24.593138  836859 system_pods.go:89] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:17:24.593164  836859 system_pods.go:89] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:17:24.593185  836859 system_pods.go:89] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:24.593207  836859 system_pods.go:89] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:24.593237  836859 system_pods.go:89] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:17:24.593261  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:24.593284  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:24.593306  836859 system_pods.go:89] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:24.593348  836859 retry.go:31] will retry after 318.991686ms: missing components: kube-dns
	I1018 12:17:24.691702  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:24.691937  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:24.692687  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:24.920904  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:24.924994  836859 system_pods.go:86] 19 kube-system pods found
	I1018 12:17:24.925074  836859 system_pods.go:89] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:24.925101  836859 system_pods.go:89] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:24.925142  836859 system_pods.go:89] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:17:24.925167  836859 system_pods.go:89] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:17:24.925186  836859 system_pods.go:89] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:24.925208  836859 system_pods.go:89] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:24.925240  836859 system_pods.go:89] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:24.925263  836859 system_pods.go:89] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:24.925287  836859 system_pods.go:89] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:17:24.925307  836859 system_pods.go:89] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:24.925342  836859 system_pods.go:89] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:24.925367  836859 system_pods.go:89] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:17:24.925388  836859 system_pods.go:89] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:17:24.925410  836859 system_pods.go:89] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:24.925443  836859 system_pods.go:89] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:24.925468  836859 system_pods.go:89] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:17:24.925490  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:24.925514  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:24.925549  836859 system_pods.go:89] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:24.925581  836859 retry.go:31] will retry after 401.03519ms: missing components: kube-dns
	I1018 12:17:25.106077  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:25.202888  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:25.203462  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:25.340827  836859 system_pods.go:86] 19 kube-system pods found
	I1018 12:17:25.348947  836859 system_pods.go:89] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Running
	I1018 12:17:25.349020  836859 system_pods.go:89] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:25.349046  836859 system_pods.go:89] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:17:25.349071  836859 system_pods.go:89] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:17:25.349113  836859 system_pods.go:89] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:25.349134  836859 system_pods.go:89] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:25.349155  836859 system_pods.go:89] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:25.349187  836859 system_pods.go:89] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:25.349212  836859 system_pods.go:89] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:17:25.349230  836859 system_pods.go:89] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:25.349251  836859 system_pods.go:89] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:25.349286  836859 system_pods.go:89] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:17:25.349312  836859 system_pods.go:89] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:17:25.349335  836859 system_pods.go:89] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:25.349359  836859 system_pods.go:89] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:25.349390  836859 system_pods.go:89] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:17:25.349419  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:25.349443  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:25.349465  836859 system_pods.go:89] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Running
	I1018 12:17:25.349504  836859 system_pods.go:126] duration metric: took 1.052993447s to wait for k8s-apps to be running ...
	I1018 12:17:25.349530  836859 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:25.349624  836859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:25.368544  836859 system_svc.go:56] duration metric: took 19.004594ms WaitForService to wait for kubelet
	I1018 12:17:25.368616  836859 kubeadm.go:586] duration metric: took 43.737397207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:25.368655  836859 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:25.377452  836859 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:17:25.377537  836859 node_conditions.go:123] node cpu capacity is 2
	I1018 12:17:25.377563  836859 node_conditions.go:105] duration metric: took 8.886824ms to run NodePressure ...
	I1018 12:17:25.377589  836859 start.go:241] waiting for startup goroutines ...
	I1018 12:17:25.416512  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:25.600809  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:25.656390  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:25.656757  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:25.916200  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:26.100574  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:26.156793  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:26.157013  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:26.416611  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:26.602808  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:26.658391  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:26.659128  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:26.921324  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:27.104044  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:27.160966  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:27.161286  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:27.418508  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:27.602496  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:27.660667  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:27.660912  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:27.916666  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:28.100806  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:28.157616  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:28.157866  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:28.415720  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:28.601415  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:28.665711  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:28.666386  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:28.916907  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:29.101232  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:29.162427  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:29.162805  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:29.417793  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:29.605204  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:29.662306  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:29.662805  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:29.916754  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:30.104450  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:30.162951  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:30.163472  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:30.417869  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:30.602395  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:30.655423  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:30.660880  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:30.661641  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:30.916077  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:31.101322  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:31.157703  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:31.158034  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:31.416143  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:31.600054  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:31.657391  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:31.657748  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:31.861871  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.206311386s)
	W1018 12:17:31.861903  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:31.861923  836859 retry.go:31] will retry after 9.225962136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:31.916436  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:32.099690  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:32.156856  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:32.157783  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:32.416180  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:32.600941  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:32.656594  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:32.656974  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:32.916636  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:33.100971  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:33.157333  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:33.159200  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:33.416572  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:33.600633  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:33.662588  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:33.663039  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:33.916594  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:34.100622  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:34.156333  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:34.156447  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:34.416588  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:34.600953  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:34.655330  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:34.656082  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:34.916655  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:35.100926  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:35.157164  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:35.158699  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:35.416922  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:35.600886  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:35.658030  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:35.658626  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:35.917158  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:36.101068  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:36.157402  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:36.157833  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:36.416153  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:36.601229  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:36.658007  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:36.658590  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:36.917285  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:37.101680  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:37.155845  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:37.156760  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:37.417228  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:37.600905  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:37.655021  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:37.656376  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:37.917711  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:38.100499  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:38.158139  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:38.160153  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:38.418178  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:38.600476  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:38.657488  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:38.658361  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:38.916361  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:39.110782  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:39.158003  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:39.158643  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:39.417582  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:39.601630  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:39.659424  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:39.660353  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:39.917446  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:40.100613  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:40.156026  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:40.156160  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:40.416818  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:40.600790  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:40.655700  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:40.657053  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:40.918564  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:41.088872  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:41.101804  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:41.158372  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:41.158849  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:41.416626  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:41.603343  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:41.660301  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:41.660803  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:41.917411  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:42.103527  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:42.149451  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.060480249s)
	W1018 12:17:42.149507  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:42.149531  836859 retry.go:31] will retry after 22.313412643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:42.157551  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:42.158071  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:42.416558  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:42.600294  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:42.655223  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:42.657221  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:42.916995  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:43.100405  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:43.157259  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:43.157795  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:43.416261  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:43.601149  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:43.656287  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:43.656613  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:43.917129  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:44.101364  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:44.158416  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:44.158781  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:44.416022  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:44.600921  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:44.656168  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:44.657091  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:44.916509  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:45.112469  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:45.161811  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:45.164209  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:45.418479  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:45.599630  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:45.656795  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:45.656951  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:45.916559  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:46.100309  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:46.156086  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:46.157698  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:46.416542  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:46.599846  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:46.655122  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:46.657675  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:46.916778  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:47.101285  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:47.156138  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:47.157960  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:47.416612  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:47.600165  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:47.656271  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:47.658395  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:47.917017  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:48.100802  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:48.156641  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:48.157089  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:48.417951  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:48.601037  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:48.655143  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:48.656260  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:48.917543  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:49.099876  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:49.156529  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:49.157015  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:49.418379  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:49.600957  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:49.655191  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:49.656318  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:49.916904  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:50.100909  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:50.157045  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:50.157403  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:50.417473  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:50.601132  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:50.657290  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:50.657486  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:50.916879  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:51.104873  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:51.202409  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:51.202590  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:51.416748  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:51.601375  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:51.655677  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:51.656329  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:51.917986  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:52.100404  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:52.155265  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:52.158219  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:52.423064  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:52.601363  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:52.657957  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:52.659346  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:52.917122  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:53.100695  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:53.158615  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:53.158724  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:53.416829  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:53.600637  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:53.656713  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:53.657248  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:53.916227  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:54.100616  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:54.157539  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:54.160203  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:54.416426  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:54.602016  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:54.657136  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:54.657517  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:54.918292  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:55.101360  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:55.157501  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:55.157962  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:55.438732  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:55.600405  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:55.655554  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:55.656593  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:55.917470  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:56.101432  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:56.155309  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:56.156907  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:56.416225  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:56.601172  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:56.655922  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:56.657171  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:56.916169  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:57.101292  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:57.157823  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:57.158222  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:57.417475  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:57.602420  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:57.660812  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:57.660979  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:57.916459  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:58.100611  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:58.156866  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:58.157898  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:58.416613  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:58.600402  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:58.655931  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:58.658259  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:58.916548  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:59.100583  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:59.157368  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:59.157786  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:59.417740  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:59.600804  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:59.658402  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:59.658819  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:59.916667  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:00.128940  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:00.169770  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:00.170287  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:00.417415  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:00.600981  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:00.654922  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:00.657547  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:00.916411  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:01.100759  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:01.157275  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:01.157730  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:01.417278  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:01.600306  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:01.656870  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:01.657069  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:01.916017  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:02.100625  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:02.156498  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:02.157792  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:02.417005  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:02.600282  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:02.656670  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:02.656868  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:02.916744  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:03.100521  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:03.156601  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:03.157752  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:03.417433  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:03.600745  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:03.657337  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:03.657520  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:03.916633  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:04.099965  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:04.156728  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:04.156848  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:04.416591  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:04.463721  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:18:04.600614  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:04.656282  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:04.656824  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:04.915976  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:05.101430  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:05.157638  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:05.157743  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:05.420072  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:05.600560  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:05.601403  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.137640602s)
	W1018 12:18:05.601436  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:18:05.601454  836859 retry.go:31] will retry after 33.384168177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:18:05.657350  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:05.657653  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:05.917128  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:06.100482  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:06.156938  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:06.157086  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:06.416599  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:06.600243  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:06.657657  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:06.657779  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:06.916301  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:07.101431  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:07.156926  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:07.157536  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:07.417351  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:07.601508  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:07.656764  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:07.657006  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:07.916001  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:08.100356  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:08.156572  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:08.157238  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:08.416654  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:08.600398  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:08.656438  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:08.656759  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:08.916705  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:09.105202  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:09.163711  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:09.163869  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:09.416919  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:09.601190  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:09.658221  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:09.658595  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:09.917179  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:10.100860  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:10.162037  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:10.162788  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:10.416360  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:10.600249  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:10.655268  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:10.656670  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:10.915900  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:11.099880  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:11.156225  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:11.157059  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:11.416431  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:11.601265  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:11.656716  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:11.657134  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:11.916422  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:12.100556  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:12.157179  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:12.157585  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:12.416178  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:12.600525  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:12.656613  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:12.657065  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:12.916570  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:13.100485  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:13.156180  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:13.156489  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:13.417275  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:13.601537  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:13.655906  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:13.656014  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:13.915955  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:14.099924  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:14.156938  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:14.157292  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:14.416551  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:14.600693  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:14.656636  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:14.657570  836859 kapi.go:107] duration metric: took 1m26.005719538s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 12:18:14.916759  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:15.100305  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:15.158985  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:15.416733  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:15.601172  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:15.656494  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:15.917662  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:16.101040  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:16.156388  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:16.416641  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:16.600127  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:16.657008  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:16.916650  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:17.100765  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:17.156078  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:17.416574  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:17.601098  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:17.656251  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:17.917008  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:18.101147  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:18.156204  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:18.416838  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:18.601026  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:18.659606  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:18.916507  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:19.101756  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:19.157316  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:19.417724  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:19.601133  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:19.657850  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:19.917995  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:20.103374  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:20.156927  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:20.418733  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:20.600766  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:20.657394  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:20.930554  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:21.101605  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:21.157475  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:21.420884  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:21.600876  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:21.656549  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:21.918453  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:22.104009  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:22.157276  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:22.416690  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:22.602841  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:22.656074  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:22.916819  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:23.099870  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:23.155867  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:23.416625  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:23.600533  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:23.656454  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:23.917015  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:24.100555  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:24.158232  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:24.416440  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:24.601475  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:24.656912  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:24.916140  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:25.101234  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:25.156657  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:25.420006  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:25.600791  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:25.660299  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:25.916696  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:26.100219  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:26.156413  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:26.416604  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:26.601220  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:26.657280  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:26.916889  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:27.101279  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:27.156731  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:27.416264  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:27.607728  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:27.705438  836859 kapi.go:107] duration metric: took 1m39.052597178s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 12:18:27.916901  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:28.100487  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:28.416495  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:28.600754  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:28.915880  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:29.106130  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:29.416528  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:29.601354  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:29.916725  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:30.106247  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:30.416523  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:30.600874  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:30.917948  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:31.100835  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:31.416179  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:31.603432  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:31.917516  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:32.101375  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:32.416665  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:32.599718  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:32.916004  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:33.109123  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:33.417067  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:33.601079  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:33.916716  836859 kapi.go:107] duration metric: took 1m41.503917442s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 12:18:33.919683  836859 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-206214 cluster.
	I1018 12:18:33.922571  836859 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 12:18:33.924992  836859 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 12:18:34.100554  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:34.603985  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:35.100727  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:35.600996  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:36.101004  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:36.601123  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:37.100154  836859 kapi.go:107] duration metric: took 1m48.003580896s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 12:18:38.985886  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:18:39.874975  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:18:39.875076  836859 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 12:18:39.878064  836859 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 12:18:39.880950  836859 addons.go:514] duration metric: took 1m58.249330303s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 12:18:39.881017  836859 start.go:246] waiting for cluster config update ...
	I1018 12:18:39.881040  836859 start.go:255] writing updated cluster config ...
	I1018 12:18:39.881372  836859 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:39.885425  836859 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:39.889573  836859 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnvks" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.894550  836859 pod_ready.go:94] pod "coredns-66bc5c9577-nnvks" is "Ready"
	I1018 12:18:39.894582  836859 pod_ready.go:86] duration metric: took 4.979486ms for pod "coredns-66bc5c9577-nnvks" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.897162  836859 pod_ready.go:83] waiting for pod "etcd-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.902321  836859 pod_ready.go:94] pod "etcd-addons-206214" is "Ready"
	I1018 12:18:39.902347  836859 pod_ready.go:86] duration metric: took 5.15581ms for pod "etcd-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.904846  836859 pod_ready.go:83] waiting for pod "kube-apiserver-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.910089  836859 pod_ready.go:94] pod "kube-apiserver-addons-206214" is "Ready"
	I1018 12:18:39.910118  836859 pod_ready.go:86] duration metric: took 5.243163ms for pod "kube-apiserver-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.912630  836859 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:40.290098  836859 pod_ready.go:94] pod "kube-controller-manager-addons-206214" is "Ready"
	I1018 12:18:40.290131  836859 pod_ready.go:86] duration metric: took 377.472411ms for pod "kube-controller-manager-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:40.489057  836859 pod_ready.go:83] waiting for pod "kube-proxy-hlgtx" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:40.889332  836859 pod_ready.go:94] pod "kube-proxy-hlgtx" is "Ready"
	I1018 12:18:40.889363  836859 pod_ready.go:86] duration metric: took 400.277147ms for pod "kube-proxy-hlgtx" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:41.089627  836859 pod_ready.go:83] waiting for pod "kube-scheduler-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:41.491174  836859 pod_ready.go:94] pod "kube-scheduler-addons-206214" is "Ready"
	I1018 12:18:41.491208  836859 pod_ready.go:86] duration metric: took 401.551929ms for pod "kube-scheduler-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:41.491232  836859 pod_ready.go:40] duration metric: took 1.60577079s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:41.554727  836859 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:18:41.558191  836859 out.go:179] * Done! kubectl is now configured to use "addons-206214" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 12:21:52 addons-206214 crio[832]: time="2025-10-18T12:21:52.533678153Z" level=info msg="Removed container 39e87627e065b036267e88b132090d9c13b1d3d7bb9f2520ede54cbae2502bcb: kube-system/registry-creds-764b6fb674-46n6w/registry-creds" id=f6236993-80d3-4928-8849-fe505b2e2a4c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.592660696Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-7vr78/POD" id=779061f8-c1e6-4caa-9afd-ecc8bd9e6e0a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.592729941Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.603711973Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-7vr78 Namespace:default ID:974a6267ccab3026d621be3ac27b8e1b1a9cd3ad33b488054409962f656e8907 UID:400cfa0f-22f4-4869-a91d-f20cd147a7e2 NetNS:/var/run/netns/0aa454b6-1647-4974-8154-c07124495e4c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002e13a20}] Aliases:map[]}"
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.603879541Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-7vr78 to CNI network \"kindnet\" (type=ptp)"
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.616478704Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-7vr78 Namespace:default ID:974a6267ccab3026d621be3ac27b8e1b1a9cd3ad33b488054409962f656e8907 UID:400cfa0f-22f4-4869-a91d-f20cd147a7e2 NetNS:/var/run/netns/0aa454b6-1647-4974-8154-c07124495e4c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002e13a20}] Aliases:map[]}"
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.616810171Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-7vr78 for CNI network kindnet (type=ptp)"
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.622726105Z" level=info msg="Ran pod sandbox 974a6267ccab3026d621be3ac27b8e1b1a9cd3ad33b488054409962f656e8907 with infra container: default/hello-world-app-5d498dc89-7vr78/POD" id=779061f8-c1e6-4caa-9afd-ecc8bd9e6e0a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.624256966Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5a36874b-8b1e-473b-bbb3-8a85e2287b67 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.624483266Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=5a36874b-8b1e-473b-bbb3-8a85e2287b67 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.624583395Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=5a36874b-8b1e-473b-bbb3-8a85e2287b67 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.625621457Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=7cb7730d-535b-4abe-beba-33a36e19cea0 name=/runtime.v1.ImageService/PullImage
	Oct 18 12:22:03 addons-206214 crio[832]: time="2025-10-18T12:22:03.627411456Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.227770198Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=7cb7730d-535b-4abe-beba-33a36e19cea0 name=/runtime.v1.ImageService/PullImage
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.229273456Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bac02f37-ac7c-4779-9d2c-de99054d0205 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.233362319Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3212a503-4867-4d9f-a543-6364b666cf7c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.248497076Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-7vr78/hello-world-app" id=abc1a3b3-f3ea-44d7-ac64-5191a599ba05 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.24943473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.264590295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.265220458Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f6fe8a16fe6550d0d6388034533fd5234d570f3b4f05e5721b95bf6fe570c174/merged/etc/passwd: no such file or directory"
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.265257554Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f6fe8a16fe6550d0d6388034533fd5234d570f3b4f05e5721b95bf6fe570c174/merged/etc/group: no such file or directory"
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.265882884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.292934883Z" level=info msg="Created container 69abf9212a5a13d3f514fb453fbf356bcea726946831245a84c99ca974c1e825: default/hello-world-app-5d498dc89-7vr78/hello-world-app" id=abc1a3b3-f3ea-44d7-ac64-5191a599ba05 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.295932333Z" level=info msg="Starting container: 69abf9212a5a13d3f514fb453fbf356bcea726946831245a84c99ca974c1e825" id=c63ec1f9-19c0-4cf9-b24e-29b11c58c9c8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:22:04 addons-206214 crio[832]: time="2025-10-18T12:22:04.302082821Z" level=info msg="Started container" PID=7327 containerID=69abf9212a5a13d3f514fb453fbf356bcea726946831245a84c99ca974c1e825 description=default/hello-world-app-5d498dc89-7vr78/hello-world-app id=c63ec1f9-19c0-4cf9-b24e-29b11c58c9c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=974a6267ccab3026d621be3ac27b8e1b1a9cd3ad33b488054409962f656e8907
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	69abf9212a5a1       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   974a6267ccab3       hello-world-app-5d498dc89-7vr78             default
	c78c97ffddb35       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             13 seconds ago           Exited              registry-creds                           2                   6ba932969fa76       registry-creds-764b6fb674-46n6w             kube-system
	f4cad083ae0d3       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   831d67b2a1692       nginx                                       default
	faa7970b253d6       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   6ce195f4267a8       busybox                                     default
	5b76cd93740ab       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   cf8219069e293       csi-hostpathplugin-sx7b6                    kube-system
	98f3833f9be11       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   cf8219069e293       csi-hostpathplugin-sx7b6                    kube-system
	c3e4ce21efe38       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   83f312298a7ea       gcp-auth-78565c9fb4-rc4zx                   gcp-auth
	45adaaa4d7905       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   cf8219069e293       csi-hostpathplugin-sx7b6                    kube-system
	f5ac90f527a67       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   cf8219069e293       csi-hostpathplugin-sx7b6                    kube-system
	5dc40e4564be4       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   cf8219069e293       csi-hostpathplugin-sx7b6                    kube-system
	a121a292ddfcf       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   fccc6252764c4       ingress-nginx-controller-675c5ddd98-jkzpm   ingress-nginx
	29a93fedb418d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   d9a70a7a0cd7a       gadget-798dm                                gadget
	a32692f08d633       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   cf8219069e293       csi-hostpathplugin-sx7b6                    kube-system
	16bf9cff88592       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   26266d929a6f5       registry-proxy-cxqbx                        kube-system
	296399ec57fb6       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   e18fa2b26589e       nvidia-device-plugin-daemonset-k8hvk        kube-system
	6b1084a290aa4       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   5ba43cb93128d       local-path-provisioner-648f6765c9-n22lq     local-path-storage
	f4e6a924c7832       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   a2bf968d722ca       yakd-dashboard-5ff678cb9-8zhf4              yakd-dashboard
	6ce61cd446801       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   dddad451e3153       csi-hostpath-resizer-0                      kube-system
	514d718d40ef1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   0c9c66018b01d       snapshot-controller-7d9fbc56b8-sc8l2        kube-system
	a52b4e8e9dff8       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             4 minutes ago            Exited              patch                                    1                   be10c9c70c060       ingress-nginx-admission-patch-qtz2v         ingress-nginx
	d130ef4648a79       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   c314f96a1f14b       ingress-nginx-admission-create-v7rd7        ingress-nginx
	afeb96d141fb8       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   0669a4b5f4464       cloud-spanner-emulator-86bd5cbb97-xt4gl     default
	1f1880b904fc1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   93010ddc3d3d7       snapshot-controller-7d9fbc56b8-fp5gt        kube-system
	119f93a0bf370       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   c97eafb09d813       registry-6b586f9694-mvmwh                   kube-system
	640a2e84493b8       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   2a92c4c6f5730       kube-ingress-dns-minikube                   kube-system
	b417690dc2872       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   5106b0becca76       csi-hostpath-attacher-0                     kube-system
	bc47b235de19a       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   96e3fe32f5f56       metrics-server-85b7d694d7-lxg99             kube-system
	0647083a60005       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   74a902c5b35bb       storage-provisioner                         kube-system
	7f3683b181a0b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   9df4efdd993cb       coredns-66bc5c9577-nnvks                    kube-system
	0cb48535119c4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   d8413adfca4ec       kindnet-l2ffr                               kube-system
	58409db23c34e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   bd119c0250b10       kube-proxy-hlgtx                            kube-system
	6db03b7b7dbcb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   2a4c8bd604166       kube-scheduler-addons-206214                kube-system
	4db50608b742d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   cc962ef98b3ed       kube-controller-manager-addons-206214       kube-system
	cf0330eac63a5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   6cf2de3a09eb9       kube-apiserver-addons-206214                kube-system
	e5013ec0caf4e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   89885d36f8dd4       etcd-addons-206214                          kube-system
	
	
	==> coredns [7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15] <==
	[INFO] 10.244.0.12:53934 - 42406 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00203406s
	[INFO] 10.244.0.12:53934 - 11560 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000110197s
	[INFO] 10.244.0.12:53934 - 34049 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000276747s
	[INFO] 10.244.0.12:36728 - 8872 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000173828s
	[INFO] 10.244.0.12:36728 - 8683 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071542s
	[INFO] 10.244.0.12:43030 - 46603 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081683s
	[INFO] 10.244.0.12:43030 - 46398 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000209448s
	[INFO] 10.244.0.12:46958 - 36915 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122677s
	[INFO] 10.244.0.12:46958 - 36727 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101269s
	[INFO] 10.244.0.12:37987 - 36155 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001547677s
	[INFO] 10.244.0.12:37987 - 36352 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001652615s
	[INFO] 10.244.0.12:53807 - 45531 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114891s
	[INFO] 10.244.0.12:53807 - 45152 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085974s
	[INFO] 10.244.0.21:48355 - 12811 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000141738s
	[INFO] 10.244.0.21:48320 - 54814 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002510786s
	[INFO] 10.244.0.21:53079 - 61674 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164893s
	[INFO] 10.244.0.21:53967 - 7932 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094418s
	[INFO] 10.244.0.21:58233 - 5746 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131957s
	[INFO] 10.244.0.21:34356 - 53882 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009624s
	[INFO] 10.244.0.21:43007 - 27275 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002353179s
	[INFO] 10.244.0.21:53353 - 18185 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00556325s
	[INFO] 10.244.0.21:49126 - 11639 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002921393s
	[INFO] 10.244.0.21:35936 - 14901 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003103755s
	[INFO] 10.244.0.23:38492 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00021565s
	[INFO] 10.244.0.23:58537 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145833s
	
	
	==> describe nodes <==
	Name:               addons-206214
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-206214
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-206214
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-206214
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-206214"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-206214
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:22:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:21:52 +0000   Sat, 18 Oct 2025 12:16:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:21:52 +0000   Sat, 18 Oct 2025 12:16:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:21:52 +0000   Sat, 18 Oct 2025 12:16:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:21:52 +0000   Sat, 18 Oct 2025 12:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-206214
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                48fd73e9-b11f-46d2-a783-76daabc219c5
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     cloud-spanner-emulator-86bd5cbb97-xt4gl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     hello-world-app-5d498dc89-7vr78              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-798dm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  gcp-auth                    gcp-auth-78565c9fb4-rc4zx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jkzpm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m17s
	  kube-system                 coredns-66bc5c9577-nnvks                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m23s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 csi-hostpathplugin-sx7b6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-addons-206214                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m29s
	  kube-system                 kindnet-l2ffr                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m23s
	  kube-system                 kube-apiserver-addons-206214                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-controller-manager-addons-206214        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-hlgtx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-addons-206214                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 metrics-server-85b7d694d7-lxg99              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m19s
	  kube-system                 nvidia-device-plugin-daemonset-k8hvk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 registry-6b586f9694-mvmwh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 registry-creds-764b6fb674-46n6w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 registry-proxy-cxqbx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 snapshot-controller-7d9fbc56b8-fp5gt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 snapshot-controller-7d9fbc56b8-sc8l2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  local-path-storage          local-path-provisioner-648f6765c9-n22lq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8zhf4               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m21s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node addons-206214 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node addons-206214 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m37s (x8 over 5m37s)  kubelet          Node addons-206214 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m28s                  kubelet          Node addons-206214 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m28s                  kubelet          Node addons-206214 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m28s                  kubelet          Node addons-206214 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m24s                  node-controller  Node addons-206214 event: Registered Node addons-206214 in Controller
	  Normal   NodeReady                4m42s                  kubelet          Node addons-206214 status is now: NodeReady
	
	
	==> dmesg <==
	[ +18.372160] overlayfs: idmapped layers are currently not supported
	[Oct18 10:49] overlayfs: idmapped layers are currently not supported
	[Oct18 10:50] overlayfs: idmapped layers are currently not supported
	[Oct18 10:51] overlayfs: idmapped layers are currently not supported
	[ +26.703285] overlayfs: idmapped layers are currently not supported
	[Oct18 10:52] overlayfs: idmapped layers are currently not supported
	[Oct18 10:53] overlayfs: idmapped layers are currently not supported
	[Oct18 10:54] overlayfs: idmapped layers are currently not supported
	[ +42.459395] overlayfs: idmapped layers are currently not supported
	[  +0.085900] overlayfs: idmapped layers are currently not supported
	[Oct18 10:56] overlayfs: idmapped layers are currently not supported
	[ +18.116656] overlayfs: idmapped layers are currently not supported
	[Oct18 10:58] overlayfs: idmapped layers are currently not supported
	[  +3.156194] overlayfs: idmapped layers are currently not supported
	[Oct18 11:00] overlayfs: idmapped layers are currently not supported
	[Oct18 11:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 11:22] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=00000037 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000d8d7ca74{9P.session} n=00000000f5b34d7b
	[  +0.001120] FS-Cache: O-key=[10] '34323937363632323639'
	[  +0.000787] FS-Cache: N-cookie c=00000038 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=00000000204faf8b
	[  +0.001107] FS-Cache: N-key=[10] '34323937363632323639'
	[Oct18 12:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 12:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608] <==
	{"level":"warn","ts":"2025-10-18T12:16:31.536437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.552135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.565873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.600274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.623936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.650859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.713752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.731238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.744160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.810729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.816466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.848274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.867516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.901222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.937712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.990322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:32.024030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:32.048499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:32.257735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:49.419791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:49.435950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:11.397384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:11.419959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:11.473001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:11.491637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51136","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c3e4ce21efe3844f94e7dab5975609f7ccbfa7d1d7a51738baf08d96acf21a3d] <==
	2025/10/18 12:18:33 GCP Auth Webhook started!
	2025/10/18 12:18:42 Ready to marshal response ...
	2025/10/18 12:18:42 Ready to write response ...
	2025/10/18 12:18:42 Ready to marshal response ...
	2025/10/18 12:18:42 Ready to write response ...
	2025/10/18 12:18:42 Ready to marshal response ...
	2025/10/18 12:18:42 Ready to write response ...
	2025/10/18 12:19:01 Ready to marshal response ...
	2025/10/18 12:19:01 Ready to write response ...
	2025/10/18 12:19:11 Ready to marshal response ...
	2025/10/18 12:19:11 Ready to write response ...
	2025/10/18 12:19:12 Ready to marshal response ...
	2025/10/18 12:19:12 Ready to write response ...
	2025/10/18 12:19:17 Ready to marshal response ...
	2025/10/18 12:19:17 Ready to write response ...
	2025/10/18 12:19:20 Ready to marshal response ...
	2025/10/18 12:19:20 Ready to write response ...
	2025/10/18 12:19:35 Ready to marshal response ...
	2025/10/18 12:19:35 Ready to write response ...
	2025/10/18 12:19:42 Ready to marshal response ...
	2025/10/18 12:19:42 Ready to write response ...
	2025/10/18 12:22:03 Ready to marshal response ...
	2025/10/18 12:22:03 Ready to write response ...
	
	
	==> kernel <==
	 12:22:05 up  4:04,  0 user,  load average: 1.11, 2.21, 3.01
	Linux addons-206214 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416] <==
	I1018 12:20:03.430225       1 main.go:301] handling current node
	I1018 12:20:13.421959       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:20:13.421996       1 main.go:301] handling current node
	I1018 12:20:23.427370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:20:23.427406       1 main.go:301] handling current node
	I1018 12:20:33.430639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:20:33.430675       1 main.go:301] handling current node
	I1018 12:20:43.430324       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:20:43.430354       1 main.go:301] handling current node
	I1018 12:20:53.427734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:20:53.427771       1 main.go:301] handling current node
	I1018 12:21:03.429316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:21:03.429351       1 main.go:301] handling current node
	I1018 12:21:13.431018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:21:13.431247       1 main.go:301] handling current node
	I1018 12:21:23.427387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:21:23.427517       1 main.go:301] handling current node
	I1018 12:21:33.429769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:21:33.429809       1 main.go:301] handling current node
	I1018 12:21:43.423027       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:21:43.423076       1 main.go:301] handling current node
	I1018 12:21:53.423010       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:21:53.423044       1 main.go:301] handling current node
	I1018 12:22:03.422299       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:22:03.422383       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2] <==
	W1018 12:17:11.416957       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 12:17:11.462094       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 12:17:11.486802       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 12:17:23.838420       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.203.81:443: connect: connection refused
	E1018 12:17:23.838505       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.203.81:443: connect: connection refused" logger="UnhandledError"
	W1018 12:17:23.842269       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.203.81:443: connect: connection refused
	E1018 12:17:23.842312       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.203.81:443: connect: connection refused" logger="UnhandledError"
	W1018 12:17:23.941066       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.203.81:443: connect: connection refused
	E1018 12:17:23.941114       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.203.81:443: connect: connection refused" logger="UnhandledError"
	W1018 12:17:39.274153       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 12:17:39.274222       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 12:17:39.275478       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.21.93:443: connect: connection refused" logger="UnhandledError"
	E1018 12:17:39.276379       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.21.93:443: connect: connection refused" logger="UnhandledError"
	E1018 12:17:39.281482       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.21.93:443: connect: connection refused" logger="UnhandledError"
	I1018 12:17:39.445465       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 12:18:50.635868       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43876: use of closed network connection
	E1018 12:18:50.857466       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43902: use of closed network connection
	I1018 12:19:29.196159       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1018 12:19:42.246058       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 12:19:42.553795       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.235.78"}
	E1018 12:19:43.032674       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1018 12:22:03.438911       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.207.114"}
	
	
	==> kube-controller-manager [4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b] <==
	I1018 12:16:41.405782       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 12:16:41.410996       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:16:41.418201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:16:41.418229       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:16:41.418237       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:16:41.426985       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:16:41.427483       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:16:41.427716       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:16:41.430871       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:16:41.434368       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:16:41.434440       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:16:41.434471       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:16:41.434476       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:16:41.434485       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:16:41.438586       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:16:41.443974       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-206214" podCIDRs=["10.244.0.0/24"]
	E1018 12:16:46.562503       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 12:17:11.389517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 12:17:11.389664       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 12:17:11.389714       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 12:17:11.429761       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 12:17:11.436887       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 12:17:11.490187       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:11.537951       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:17:26.355100       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f] <==
	I1018 12:16:43.257350       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:16:43.361288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:16:43.462264       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:16:43.462302       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:16:43.462379       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:16:43.506945       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:16:43.507007       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:16:43.608898       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:16:43.609296       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:16:43.609320       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:16:43.615194       1 config.go:200] "Starting service config controller"
	I1018 12:16:43.615215       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:16:43.615232       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:16:43.615237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:16:43.615248       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:16:43.615252       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:16:43.617307       1 config.go:309] "Starting node config controller"
	I1018 12:16:43.617321       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:16:43.617328       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:16:43.719062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:16:43.719103       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:16:43.719151       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001] <==
	I1018 12:16:35.017517       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:16:35.022129       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:16:35.022241       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 12:16:35.025861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 12:16:35.026349       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:16:35.026760       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:16:35.033338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:16:35.038911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:35.039132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:16:35.040768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:35.043248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:16:35.043504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:16:35.043587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:16:35.044270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:16:35.044374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:16:35.044455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:16:35.044547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:16:35.044677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:16:35.044793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:16:35.044868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:16:35.045031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:16:35.047105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:16:35.047281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:35.047421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1018 12:16:36.023192       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:21:34 addons-206214 kubelet[1279]: W1018 12:21:34.299773    1279 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/crio-6ba932969fa763f5a8cefc697d2d84ea8301f78e7e7e1f9d65d76ef18268fe67 WatchSource:0}: Error finding container 6ba932969fa763f5a8cefc697d2d84ea8301f78e7e7e1f9d65d76ef18268fe67: Status 404 returned error can't find the container with id 6ba932969fa763f5a8cefc697d2d84ea8301f78e7e7e1f9d65d76ef18268fe67
	Oct 18 12:21:36 addons-206214 kubelet[1279]: I1018 12:21:36.428496    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-46n6w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:21:36 addons-206214 kubelet[1279]: I1018 12:21:36.428554    1279 scope.go:117] "RemoveContainer" containerID="abbbeb2f1977ae06304e97461eaa36ba2520d099b6ac3b5f7f4aad6a584f187f"
	Oct 18 12:21:36 addons-206214 kubelet[1279]: I1018 12:21:36.461257    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=111.333152681 podStartE2EDuration="1m54.461228027s" podCreationTimestamp="2025-10-18 12:19:42 +0000 UTC" firstStartedPulling="2025-10-18 12:19:42.872824191 +0000 UTC m=+186.179415805" lastFinishedPulling="2025-10-18 12:19:46.000899521 +0000 UTC m=+189.307491151" observedRunningTime="2025-10-18 12:19:47.050777742 +0000 UTC m=+190.357369364" watchObservedRunningTime="2025-10-18 12:21:36.461228027 +0000 UTC m=+299.767819641"
	Oct 18 12:21:37 addons-206214 kubelet[1279]: I1018 12:21:37.078925    1279 scope.go:117] "RemoveContainer" containerID="abbbeb2f1977ae06304e97461eaa36ba2520d099b6ac3b5f7f4aad6a584f187f"
	Oct 18 12:21:37 addons-206214 kubelet[1279]: I1018 12:21:37.433991    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-46n6w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:21:37 addons-206214 kubelet[1279]: I1018 12:21:37.434470    1279 scope.go:117] "RemoveContainer" containerID="39e87627e065b036267e88b132090d9c13b1d3d7bb9f2520ede54cbae2502bcb"
	Oct 18 12:21:37 addons-206214 kubelet[1279]: E1018 12:21:37.434701    1279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-46n6w_kube-system(42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3)\"" pod="kube-system/registry-creds-764b6fb674-46n6w" podUID="42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3"
	Oct 18 12:21:38 addons-206214 kubelet[1279]: I1018 12:21:38.437395    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-46n6w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:21:38 addons-206214 kubelet[1279]: I1018 12:21:38.437460    1279 scope.go:117] "RemoveContainer" containerID="39e87627e065b036267e88b132090d9c13b1d3d7bb9f2520ede54cbae2502bcb"
	Oct 18 12:21:38 addons-206214 kubelet[1279]: E1018 12:21:38.437613    1279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-46n6w_kube-system(42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3)\"" pod="kube-system/registry-creds-764b6fb674-46n6w" podUID="42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3"
	Oct 18 12:21:51 addons-206214 kubelet[1279]: I1018 12:21:51.868976    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-46n6w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:21:51 addons-206214 kubelet[1279]: I1018 12:21:51.869525    1279 scope.go:117] "RemoveContainer" containerID="39e87627e065b036267e88b132090d9c13b1d3d7bb9f2520ede54cbae2502bcb"
	Oct 18 12:21:52 addons-206214 kubelet[1279]: I1018 12:21:52.512096    1279 scope.go:117] "RemoveContainer" containerID="39e87627e065b036267e88b132090d9c13b1d3d7bb9f2520ede54cbae2502bcb"
	Oct 18 12:21:52 addons-206214 kubelet[1279]: I1018 12:21:52.512880    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-46n6w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:21:52 addons-206214 kubelet[1279]: I1018 12:21:52.513053    1279 scope.go:117] "RemoveContainer" containerID="c78c97ffddb352c074beba661de88f1c3b74990c87f70e9c3248a0538d75934b"
	Oct 18 12:21:52 addons-206214 kubelet[1279]: E1018 12:21:52.513323    1279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-46n6w_kube-system(42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3)\"" pod="kube-system/registry-creds-764b6fb674-46n6w" podUID="42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3"
	Oct 18 12:21:59 addons-206214 kubelet[1279]: I1018 12:21:59.868863    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-k8hvk" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:22:03 addons-206214 kubelet[1279]: I1018 12:22:03.362871    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/400cfa0f-22f4-4869-a91d-f20cd147a7e2-gcp-creds\") pod \"hello-world-app-5d498dc89-7vr78\" (UID: \"400cfa0f-22f4-4869-a91d-f20cd147a7e2\") " pod="default/hello-world-app-5d498dc89-7vr78"
	Oct 18 12:22:03 addons-206214 kubelet[1279]: I1018 12:22:03.362925    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlwn\" (UniqueName: \"kubernetes.io/projected/400cfa0f-22f4-4869-a91d-f20cd147a7e2-kube-api-access-qhlwn\") pod \"hello-world-app-5d498dc89-7vr78\" (UID: \"400cfa0f-22f4-4869-a91d-f20cd147a7e2\") " pod="default/hello-world-app-5d498dc89-7vr78"
	Oct 18 12:22:03 addons-206214 kubelet[1279]: W1018 12:22:03.619094    1279 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/crio-974a6267ccab3026d621be3ac27b8e1b1a9cd3ad33b488054409962f656e8907 WatchSource:0}: Error finding container 974a6267ccab3026d621be3ac27b8e1b1a9cd3ad33b488054409962f656e8907: Status 404 returned error can't find the container with id 974a6267ccab3026d621be3ac27b8e1b1a9cd3ad33b488054409962f656e8907
	Oct 18 12:22:04 addons-206214 kubelet[1279]: I1018 12:22:04.582624    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-7vr78" podStartSLOduration=0.97700392 podStartE2EDuration="1.582602885s" podCreationTimestamp="2025-10-18 12:22:03 +0000 UTC" firstStartedPulling="2025-10-18 12:22:03.624974752 +0000 UTC m=+326.931566366" lastFinishedPulling="2025-10-18 12:22:04.230573709 +0000 UTC m=+327.537165331" observedRunningTime="2025-10-18 12:22:04.58178762 +0000 UTC m=+327.888379242" watchObservedRunningTime="2025-10-18 12:22:04.582602885 +0000 UTC m=+327.889194498"
	Oct 18 12:22:04 addons-206214 kubelet[1279]: I1018 12:22:04.868458    1279 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-46n6w" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:22:04 addons-206214 kubelet[1279]: I1018 12:22:04.868529    1279 scope.go:117] "RemoveContainer" containerID="c78c97ffddb352c074beba661de88f1c3b74990c87f70e9c3248a0538d75934b"
	Oct 18 12:22:04 addons-206214 kubelet[1279]: E1018 12:22:04.868760    1279 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-46n6w_kube-system(42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3)\"" pod="kube-system/registry-creds-764b6fb674-46n6w" podUID="42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3"
	
	
	==> storage-provisioner [0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39] <==
	W1018 12:21:40.501874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:42.504755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:42.512264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:44.515812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:44.522981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:46.526213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:46.530622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:48.533762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:48.540296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:50.543373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:50.548338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:52.554245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:52.563003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:54.566593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:54.571544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:56.575698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:56.583225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:58.586042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:21:58.590624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:22:00.593982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:22:00.601171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:22:02.605980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:22:02.612418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:22:04.615496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:22:04.621174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-206214 -n addons-206214
helpers_test.go:269: (dbg) Run:  kubectl --context addons-206214 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-v7rd7 ingress-nginx-admission-patch-qtz2v
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-206214 describe pod ingress-nginx-admission-create-v7rd7 ingress-nginx-admission-patch-qtz2v
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-206214 describe pod ingress-nginx-admission-create-v7rd7 ingress-nginx-admission-patch-qtz2v: exit status 1 (82.592495ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-v7rd7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qtz2v" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-206214 describe pod ingress-nginx-admission-create-v7rd7 ingress-nginx-admission-patch-qtz2v: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (276.085198ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 12:22:06.535897  846601 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:22:06.536720  846601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:22:06.536759  846601 out.go:374] Setting ErrFile to fd 2...
	I1018 12:22:06.536783  846601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:22:06.537094  846601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:22:06.537409  846601 mustload.go:65] Loading cluster: addons-206214
	I1018 12:22:06.537894  846601 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:22:06.537954  846601 addons.go:606] checking whether the cluster is paused
	I1018 12:22:06.538089  846601 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:22:06.538124  846601 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:22:06.538610  846601 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:22:06.556618  846601 ssh_runner.go:195] Run: systemctl --version
	I1018 12:22:06.556688  846601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:22:06.583861  846601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:22:06.698667  846601 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:22:06.698747  846601 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:22:06.729037  846601 cri.go:89] found id: "c78c97ffddb352c074beba661de88f1c3b74990c87f70e9c3248a0538d75934b"
	I1018 12:22:06.729056  846601 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:22:06.729060  846601 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:22:06.729064  846601 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:22:06.729067  846601 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:22:06.729070  846601 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:22:06.729073  846601 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:22:06.729076  846601 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:22:06.729080  846601 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:22:06.729089  846601 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:22:06.729092  846601 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:22:06.729095  846601 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:22:06.729098  846601 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:22:06.729101  846601 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:22:06.729104  846601 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:22:06.729115  846601 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:22:06.729118  846601 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:22:06.729123  846601 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:22:06.729126  846601 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:22:06.729129  846601 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:22:06.729133  846601 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:22:06.729136  846601 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:22:06.729139  846601 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:22:06.729142  846601 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:22:06.729145  846601 cri.go:89] found id: ""
	I1018 12:22:06.729201  846601 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:22:06.745326  846601 out.go:203] 
	W1018 12:22:06.748353  846601 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:22:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:22:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:22:06.748460  846601 out.go:285] * 
	* 
	W1018 12:22:06.754813  846601 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:22:06.757997  846601 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable ingress --alsologtostderr -v=1: exit status 11 (315.177171ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 12:22:06.856025  846714 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:22:06.857321  846714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:22:06.857386  846714 out.go:374] Setting ErrFile to fd 2...
	I1018 12:22:06.857407  846714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:22:06.857733  846714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:22:06.858086  846714 mustload.go:65] Loading cluster: addons-206214
	I1018 12:22:06.858516  846714 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:22:06.858566  846714 addons.go:606] checking whether the cluster is paused
	I1018 12:22:06.858710  846714 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:22:06.858740  846714 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:22:06.859231  846714 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:22:06.881688  846714 ssh_runner.go:195] Run: systemctl --version
	I1018 12:22:06.881747  846714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:22:06.900704  846714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:22:07.011387  846714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:22:07.011504  846714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:22:07.041262  846714 cri.go:89] found id: "c78c97ffddb352c074beba661de88f1c3b74990c87f70e9c3248a0538d75934b"
	I1018 12:22:07.041290  846714 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:22:07.041296  846714 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:22:07.041305  846714 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:22:07.041309  846714 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:22:07.041312  846714 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:22:07.041315  846714 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:22:07.041318  846714 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:22:07.041322  846714 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:22:07.041329  846714 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:22:07.041332  846714 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:22:07.041336  846714 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:22:07.041339  846714 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:22:07.041342  846714 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:22:07.041345  846714 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:22:07.041350  846714 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:22:07.041353  846714 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:22:07.041356  846714 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:22:07.041359  846714 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:22:07.041362  846714 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:22:07.041367  846714 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:22:07.041370  846714 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:22:07.041377  846714 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:22:07.041380  846714 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:22:07.041383  846714 cri.go:89] found id: ""
	I1018 12:22:07.041437  846714 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:22:07.061775  846714 out.go:203] 
	W1018 12:22:07.064830  846714 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:22:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:22:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:22:07.064863  846714 out.go:285] * 
	* 
	W1018 12:22:07.071213  846714 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:22:07.074298  846714 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.20s)

x
+
TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-798dm" [46322094-46ac-49d4-b6ea-4333f2f14002] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005648183s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (288.266332ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 12:19:36.235568  844681 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:36.236455  844681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:36.236482  844681 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:36.236489  844681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:36.236833  844681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:36.237295  844681 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:36.237675  844681 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:36.237703  844681 addons.go:606] checking whether the cluster is paused
	I1018 12:19:36.237812  844681 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:36.237827  844681 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:36.238278  844681 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:36.261070  844681 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:36.261142  844681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:36.280332  844681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:36.386389  844681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:36.386546  844681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:36.418232  844681 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:36.418315  844681 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:36.418336  844681 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:36.418357  844681 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:36.418377  844681 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:36.418397  844681 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:36.418416  844681 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:36.418444  844681 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:36.418465  844681 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:36.418488  844681 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:36.418508  844681 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:36.418527  844681 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:36.418547  844681 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:36.418565  844681 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:36.418585  844681 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:36.418616  844681 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:36.418652  844681 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:36.418671  844681 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:36.418699  844681 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:36.418719  844681 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:36.418740  844681 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:36.418757  844681 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:36.418784  844681 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:36.418807  844681 cri.go:89] found id: ""
	I1018 12:19:36.418878  844681 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:36.435612  844681 out.go:203] 
	W1018 12:19:36.438593  844681 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:36.438627  844681 out.go:285] * 
	* 
	W1018 12:19:36.445035  844681 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:36.447936  844681 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.30s)
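Note on this failure mode: the gadget pods themselves were healthy (k8s-app=gadget ready in ~6s above); the FAIL comes from the addon-disable command. As the stderr shows, before disabling an addon minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node; that second step exits with status 1 because /run/runc does not exist, which is consistent with this profile running ContainerRuntime=crio rather than a runc-managed runtime. The later MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED failures in this report repeat the same pattern. A minimal way to reproduce the check by hand is sketched below (diagnostic only, not part of the test suite; it assumes the addons-206214 profile is still running):

	out/minikube-linux-arm64 -p addons-206214 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p addons-206214 ssh -- sudo runc list -f json   # reproduces: open /run/runc: no such file or directory
	out/minikube-linux-arm64 -p addons-206214 ssh -- ls -d /run/runc /run/crio   # errors for whichever runtime state directory is absent

If the last command shows no /run/runc, the failure lies in the paused-check itself rather than in the individual addons.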

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.42s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.370897ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003616807s
addons_test.go:463: (dbg) Run:  kubectl --context addons-206214 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (311.091313ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:41.617191  844801 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:41.618522  844801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:41.618577  844801 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:41.618597  844801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:41.618932  844801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:41.619303  844801 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:41.619838  844801 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:41.619893  844801 addons.go:606] checking whether the cluster is paused
	I1018 12:19:41.620044  844801 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:41.620079  844801 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:41.620615  844801 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:41.638178  844801 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:41.638234  844801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:41.656694  844801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:41.766235  844801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:41.766323  844801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:41.831379  844801 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:41.831410  844801 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:41.831415  844801 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:41.831420  844801 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:41.831423  844801 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:41.831426  844801 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:41.831430  844801 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:41.831433  844801 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:41.831436  844801 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:41.831442  844801 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:41.831452  844801 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:41.831455  844801 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:41.831459  844801 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:41.831462  844801 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:41.831465  844801 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:41.831474  844801 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:41.831482  844801 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:41.831487  844801 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:41.831491  844801 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:41.831494  844801 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:41.831497  844801 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:41.831500  844801 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:41.831503  844801 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:41.831506  844801 cri.go:89] found id: ""
	I1018 12:19:41.831572  844801 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:41.855083  844801 out.go:203] 
	W1018 12:19:41.857937  844801 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:41.857968  844801 out.go:285] * 
	* 
	W1018 12:19:41.864566  844801 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:41.868166  844801 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.42s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1018 12:18:57.539960  836086 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 12:18:57.545181  836086 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 12:18:57.545209  836086 kapi.go:107] duration metric: took 5.262845ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.272839ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-206214 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-206214 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [28959dc7-c323-42c6-80b8-58c42f76dfad] Pending
helpers_test.go:352: "task-pv-pod" [28959dc7-c323-42c6-80b8-58c42f76dfad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [28959dc7-c323-42c6-80b8-58c42f76dfad] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.008427186s
addons_test.go:572: (dbg) Run:  kubectl --context addons-206214 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-206214 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-206214 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-206214 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-206214 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-206214 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-206214 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c69e7736-0a56-4b80-89cc-db67d0669b00] Pending
helpers_test.go:352: "task-pv-pod-restore" [c69e7736-0a56-4b80-89cc-db67d0669b00] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c69e7736-0a56-4b80-89cc-db67d0669b00] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010024849s
addons_test.go:614: (dbg) Run:  kubectl --context addons-206214 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-206214 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-206214 delete volumesnapshot new-snapshot-demo
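For context on the sequence above: the test provisions a PVC (hpvc) against the csi-hostpath driver, runs task-pv-pod on it, snapshots the claim as new-snapshot-demo, waits for readyToUse, and then restores the snapshot into a second claim (hpvc-restore) consumed by task-pv-pod-restore. The repo's testdata manifests are not reproduced in this report; the sketch below only illustrates the snapshot-and-restore pair using the standard snapshot.storage.k8s.io/v1 API (the class names are placeholders, not necessarily what the testdata files use):

	kubectl --context addons-206214 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # placeholder class name
	  source:
	    persistentVolumeClaimName: hpvc
	---
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc                  # placeholder class name
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:
	    apiGroup: snapshot.storage.k8s.io
	    kind: VolumeSnapshot
	    name: new-snapshot-demo
	EOF

In the real test the restore PVC is created only after the snapshot reports readyToUse; applying both at once as above simply leaves hpvc-restore Pending until the snapshot is ready.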
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (343.324828ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:43.515154  845010 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:43.516036  845010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:43.516055  845010 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:43.516060  845010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:43.516847  845010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:43.517495  845010 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:43.518160  845010 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:43.518196  845010 addons.go:606] checking whether the cluster is paused
	I1018 12:19:43.518318  845010 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:43.518333  845010 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:43.518808  845010 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:43.544646  845010 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:43.544726  845010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:43.567055  845010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:43.690569  845010 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:43.690663  845010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:43.735252  845010 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:43.735318  845010 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:43.735338  845010 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:43.735359  845010 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:43.735397  845010 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:43.735418  845010 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:43.735437  845010 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:43.735456  845010 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:43.735476  845010 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:43.735516  845010 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:43.735541  845010 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:43.735561  845010 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:43.735580  845010 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:43.735598  845010 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:43.735629  845010 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:43.735680  845010 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:43.735714  845010 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:43.735736  845010 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:43.735771  845010 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:43.735788  845010 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:43.735809  845010 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:43.735826  845010 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:43.735855  845010 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:43.735880  845010 cri.go:89] found id: ""
	I1018 12:19:43.735937  845010 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:43.760823  845010 out.go:203] 
	W1018 12:19:43.765544  845010 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:43.765575  845010 out.go:285] * 
	* 
	W1018 12:19:43.773804  845010 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:43.776986  845010 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (355.993398ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:43.869517  845107 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:43.871688  845107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:43.871710  845107 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:43.871743  845107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:43.872060  845107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:43.872386  845107 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:43.872844  845107 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:43.872877  845107 addons.go:606] checking whether the cluster is paused
	I1018 12:19:43.872993  845107 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:43.873008  845107 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:43.873541  845107 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:43.903177  845107 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:43.903240  845107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:43.936442  845107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:44.044669  845107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:44.044783  845107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:44.091027  845107 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:44.091053  845107 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:44.091060  845107 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:44.091065  845107 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:44.091068  845107 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:44.091072  845107 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:44.091076  845107 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:44.091079  845107 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:44.091082  845107 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:44.091090  845107 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:44.091094  845107 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:44.091097  845107 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:44.091101  845107 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:44.091104  845107 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:44.091107  845107 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:44.091118  845107 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:44.091122  845107 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:44.091127  845107 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:44.091130  845107 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:44.091133  845107 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:44.091139  845107 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:44.091142  845107 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:44.091145  845107 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:44.091156  845107 cri.go:89] found id: ""
	I1018 12:19:44.091206  845107 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:44.119839  845107 out.go:203] 
	W1018 12:19:44.123294  845107 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:44.123336  845107 out.go:285] * 
	* 
	W1018 12:19:44.129968  845107 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:44.133039  845107 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (46.60s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-206214 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-206214 --alsologtostderr -v=1: exit status 11 (292.713604ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:26.737778  844060 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:26.738499  844060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:26.738516  844060 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:26.738523  844060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:26.738784  844060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:26.739083  844060 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:26.739442  844060 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:26.739469  844060 addons.go:606] checking whether the cluster is paused
	I1018 12:19:26.739572  844060 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:26.739589  844060 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:26.740096  844060 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:26.758489  844060 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:26.758601  844060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:26.785123  844060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:26.894840  844060 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:26.894925  844060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:26.929518  844060 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:26.929544  844060 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:26.929549  844060 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:26.929553  844060 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:26.929562  844060 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:26.929566  844060 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:26.929570  844060 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:26.929573  844060 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:26.929576  844060 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:26.929582  844060 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:26.929585  844060 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:26.929589  844060 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:26.929593  844060 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:26.929596  844060 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:26.929600  844060 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:26.929605  844060 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:26.929612  844060 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:26.929616  844060 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:26.929621  844060 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:26.929624  844060 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:26.929629  844060 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:26.929635  844060 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:26.929639  844060 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:26.929642  844060 cri.go:89] found id: ""
	I1018 12:19:26.929691  844060 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:26.944495  844060 out.go:203] 
	W1018 12:19:26.947505  844060 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:26.947530  844060 out.go:285] * 
	* 
	W1018 12:19:26.954049  844060 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:26.957104  844060 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-206214 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-206214
helpers_test.go:243: (dbg) docker inspect addons-206214:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f",
	        "Created": "2025-10-18T12:16:12.378611685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 837263,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:16:12.437120529Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/hostname",
	        "HostsPath": "/var/lib/docker/containers/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/hosts",
	        "LogPath": "/var/lib/docker/containers/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f/17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f-json.log",
	        "Name": "/addons-206214",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-206214:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-206214",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "17e1d1d7818dd36cbef0746b7ce5940b29cbb3bf61fa8da5a84acd73952b8f8f",
	                "LowerDir": "/var/lib/docker/overlay2/304b17f3d40107924316cd6656eaf682fd04fd515c829c87447ad69800add7f9-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/304b17f3d40107924316cd6656eaf682fd04fd515c829c87447ad69800add7f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/304b17f3d40107924316cd6656eaf682fd04fd515c829c87447ad69800add7f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/304b17f3d40107924316cd6656eaf682fd04fd515c829c87447ad69800add7f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-206214",
	                "Source": "/var/lib/docker/volumes/addons-206214/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-206214",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-206214",
	                "name.minikube.sigs.k8s.io": "addons-206214",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f740c91afafc59e9248a84d737dbdca05e891463f6dfee035a60a805f126f8e",
	            "SandboxKey": "/var/run/docker/netns/8f740c91afaf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-206214": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:76:88:cd:71:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ff548bed0e14250dfa5ffdc0b374749a90eb9d54533761e2b63e7168112ae59",
	                    "EndpointID": "a33ad3dcd3a28fb0572474ed5a685d94d58128685a2318e4eb02dbb3280c000f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-206214",
	                        "17e1d1d7818d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
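For reference, the published host ports in the container inspect output above can be read back with the same Go template that minikube itself runs later in this log (the docker container inspect -f call against "22/tcp" at 12:16:13). A minimal sketch for the API-server port, assuming only the container name (addons-206214) and the 8443/tcp mapping shown in the inspect output above:

	docker container inspect addons-206214 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# prints 33880 for this run, matching the 8443/tcp entry in the inspect output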
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-206214 -n addons-206214
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-206214 logs -n 25: (1.564450782s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-019533 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-019533   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ delete  │ -p download-only-019533                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-019533   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ start   │ -o=json --download-only -p download-only-794243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-794243   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ delete  │ -p download-only-794243                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-794243   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ delete  │ -p download-only-019533                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-019533   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ delete  │ -p download-only-794243                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-794243   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ start   │ --download-only -p download-docker-581361 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-581361 │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ delete  │ -p download-docker-581361                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-581361 │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ start   │ --download-only -p binary-mirror-959514 --alsologtostderr --binary-mirror http://127.0.0.1:36463 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-959514   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ delete  │ -p binary-mirror-959514                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-959514   │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ addons  │ enable dashboard -p addons-206214                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ addons  │ disable dashboard -p addons-206214                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ start   │ -p addons-206214 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:18 UTC │
	│ addons  │ addons-206214 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ addons-206214 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ addons  │ addons-206214 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ ip      │ addons-206214 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ addons-206214 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ ssh     │ addons-206214 ssh cat /opt/local-path-provisioner/pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │ 18 Oct 25 12:19 UTC │
	│ addons  │ addons-206214 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-206214 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	│ addons  │ enable headlamp -p addons-206214 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-206214          │ jenkins │ v1.37.0 │ 18 Oct 25 12:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:15:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:15:46.605994  836859 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:15:46.606162  836859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:15:46.606192  836859 out.go:374] Setting ErrFile to fd 2...
	I1018 12:15:46.606212  836859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:15:46.606842  836859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:15:46.607404  836859 out.go:368] Setting JSON to false
	I1018 12:15:46.608382  836859 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":14299,"bootTime":1760775448,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:15:46.608579  836859 start.go:141] virtualization:  
	I1018 12:15:46.663272  836859 out.go:179] * [addons-206214] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:15:46.696404  836859 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:15:46.696431  836859 notify.go:220] Checking for updates...
	I1018 12:15:46.760112  836859 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:15:46.791470  836859 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:15:46.808056  836859 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:15:46.840862  836859 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:15:46.873029  836859 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:15:46.905440  836859 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:15:46.931464  836859 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:15:46.931602  836859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:15:47.014191  836859 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 12:15:46.994404725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:15:47.014311  836859 docker.go:318] overlay module found
	I1018 12:15:47.049561  836859 out.go:179] * Using the docker driver based on user configuration
	I1018 12:15:47.080960  836859 start.go:305] selected driver: docker
	I1018 12:15:47.080988  836859 start.go:925] validating driver "docker" against <nil>
	I1018 12:15:47.081004  836859 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:15:47.081786  836859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:15:47.140029  836859 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 12:15:47.129650432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:15:47.140184  836859 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:15:47.140406  836859 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:15:47.176747  836859 out.go:179] * Using Docker driver with root privileges
	I1018 12:15:47.223561  836859 cni.go:84] Creating CNI manager for ""
	I1018 12:15:47.223645  836859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:15:47.223665  836859 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:15:47.223770  836859 start.go:349] cluster config:
	{Name:addons-206214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1018 12:15:47.257792  836859 out.go:179] * Starting "addons-206214" primary control-plane node in "addons-206214" cluster
	I1018 12:15:47.290638  836859 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:15:47.322735  836859 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:15:47.353728  836859 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:15:47.353820  836859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:15:47.353873  836859 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 12:15:47.353889  836859 cache.go:58] Caching tarball of preloaded images
	I1018 12:15:47.353971  836859 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:15:47.353980  836859 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:15:47.354308  836859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/config.json ...
	I1018 12:15:47.354327  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/config.json: {Name:mk339f447ad27da72d7095ab6ffb314a0c496a36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:15:47.369413  836859 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:15:47.369569  836859 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 12:15:47.369589  836859 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 12:15:47.369594  836859 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 12:15:47.369601  836859 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 12:15:47.369606  836859 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 12:16:05.797936  836859 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 12:16:05.797976  836859 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:16:05.798016  836859 start.go:360] acquireMachinesLock for addons-206214: {Name:mk40010c192481362219c1375e984e4d3894f3f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:16:05.798150  836859 start.go:364] duration metric: took 109.999µs to acquireMachinesLock for "addons-206214"
	I1018 12:16:05.798181  836859 start.go:93] Provisioning new machine with config: &{Name:addons-206214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:16:05.798253  836859 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:16:05.801776  836859 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 12:16:05.802023  836859 start.go:159] libmachine.API.Create for "addons-206214" (driver="docker")
	I1018 12:16:05.802062  836859 client.go:168] LocalClient.Create starting
	I1018 12:16:05.802184  836859 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 12:16:06.078846  836859 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 12:16:06.745143  836859 cli_runner.go:164] Run: docker network inspect addons-206214 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:16:06.761128  836859 cli_runner.go:211] docker network inspect addons-206214 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:16:06.761214  836859 network_create.go:284] running [docker network inspect addons-206214] to gather additional debugging logs...
	I1018 12:16:06.761236  836859 cli_runner.go:164] Run: docker network inspect addons-206214
	W1018 12:16:06.776942  836859 cli_runner.go:211] docker network inspect addons-206214 returned with exit code 1
	I1018 12:16:06.776973  836859 network_create.go:287] error running [docker network inspect addons-206214]: docker network inspect addons-206214: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-206214 not found
	I1018 12:16:06.776988  836859 network_create.go:289] output of [docker network inspect addons-206214]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-206214 not found
	
	** /stderr **
	I1018 12:16:06.777105  836859 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:16:06.794146  836859 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c06510}
	I1018 12:16:06.794188  836859 network_create.go:124] attempt to create docker network addons-206214 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:16:06.794254  836859 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-206214 addons-206214
	I1018 12:16:06.849592  836859 network_create.go:108] docker network addons-206214 192.168.49.0/24 created
	I1018 12:16:06.849627  836859 kic.go:121] calculated static IP "192.168.49.2" for the "addons-206214" container
	I1018 12:16:06.849720  836859 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:16:06.865721  836859 cli_runner.go:164] Run: docker volume create addons-206214 --label name.minikube.sigs.k8s.io=addons-206214 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:16:06.884880  836859 oci.go:103] Successfully created a docker volume addons-206214
	I1018 12:16:06.884962  836859 cli_runner.go:164] Run: docker run --rm --name addons-206214-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-206214 --entrypoint /usr/bin/test -v addons-206214:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:16:07.918567  836859 cli_runner.go:217] Completed: docker run --rm --name addons-206214-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-206214 --entrypoint /usr/bin/test -v addons-206214:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (1.033566586s)
	I1018 12:16:07.918607  836859 oci.go:107] Successfully prepared a docker volume addons-206214
	I1018 12:16:07.918629  836859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:16:07.918648  836859 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:16:07.918730  836859 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-206214:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:16:12.309156  836859 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-206214:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.390386685s)
	I1018 12:16:12.309195  836859 kic.go:203] duration metric: took 4.390538966s to extract preloaded images to volume ...
	W1018 12:16:12.309357  836859 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:16:12.309475  836859 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:16:12.363707  836859 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-206214 --name addons-206214 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-206214 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-206214 --network addons-206214 --ip 192.168.49.2 --volume addons-206214:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:16:12.650699  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Running}}
	I1018 12:16:12.676087  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:12.696375  836859 cli_runner.go:164] Run: docker exec addons-206214 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:16:12.745421  836859 oci.go:144] the created container "addons-206214" has a running status.
	I1018 12:16:12.745448  836859 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa...
	I1018 12:16:13.473630  836859 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:16:13.507100  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:13.528698  836859 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:16:13.528720  836859 kic_runner.go:114] Args: [docker exec --privileged addons-206214 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:16:13.575509  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:13.595212  836859 machine.go:93] provisionDockerMachine start ...
	I1018 12:16:13.595313  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:13.616215  836859 main.go:141] libmachine: Using SSH client type: native
	I1018 12:16:13.616535  836859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1018 12:16:13.616544  836859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:16:13.772140  836859 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-206214
	
	I1018 12:16:13.772167  836859 ubuntu.go:182] provisioning hostname "addons-206214"
	I1018 12:16:13.772234  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:13.791603  836859 main.go:141] libmachine: Using SSH client type: native
	I1018 12:16:13.792253  836859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1018 12:16:13.792269  836859 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-206214 && echo "addons-206214" | sudo tee /etc/hostname
	I1018 12:16:13.953348  836859 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-206214
	
	I1018 12:16:13.953445  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:13.970560  836859 main.go:141] libmachine: Using SSH client type: native
	I1018 12:16:13.970874  836859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1018 12:16:13.970898  836859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-206214' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-206214/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-206214' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:16:14.120024  836859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:16:14.120055  836859 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:16:14.120076  836859 ubuntu.go:190] setting up certificates
	I1018 12:16:14.120085  836859 provision.go:84] configureAuth start
	I1018 12:16:14.120157  836859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-206214
	I1018 12:16:14.137327  836859 provision.go:143] copyHostCerts
	I1018 12:16:14.137412  836859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:16:14.137559  836859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:16:14.137624  836859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:16:14.137680  836859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.addons-206214 san=[127.0.0.1 192.168.49.2 addons-206214 localhost minikube]
	I1018 12:16:14.630678  836859 provision.go:177] copyRemoteCerts
	I1018 12:16:14.630753  836859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:16:14.630795  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:14.648200  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:14.751396  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:16:14.769011  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:16:14.786454  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:16:14.803476  836859 provision.go:87] duration metric: took 683.366326ms to configureAuth
	I1018 12:16:14.803506  836859 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:16:14.803805  836859 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:16:14.803919  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:14.820973  836859 main.go:141] libmachine: Using SSH client type: native
	I1018 12:16:14.821271  836859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33877 <nil> <nil>}
	I1018 12:16:14.821293  836859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:16:15.097143  836859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:16:15.097170  836859 machine.go:96] duration metric: took 1.501937957s to provisionDockerMachine
	I1018 12:16:15.097181  836859 client.go:171] duration metric: took 9.29510931s to LocalClient.Create
	I1018 12:16:15.097226  836859 start.go:167] duration metric: took 9.295203234s to libmachine.API.Create "addons-206214"
	I1018 12:16:15.097247  836859 start.go:293] postStartSetup for "addons-206214" (driver="docker")
	I1018 12:16:15.097259  836859 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:16:15.097350  836859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:16:15.097422  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:15.117154  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:15.219990  836859 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:16:15.223395  836859 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:16:15.223425  836859 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:16:15.223436  836859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:16:15.223509  836859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:16:15.223532  836859 start.go:296] duration metric: took 126.277563ms for postStartSetup
	I1018 12:16:15.223874  836859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-206214
	I1018 12:16:15.240720  836859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/config.json ...
	I1018 12:16:15.241011  836859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:16:15.241062  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:15.258024  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:15.356845  836859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:16:15.361505  836859 start.go:128] duration metric: took 9.563234821s to createHost
	I1018 12:16:15.361527  836859 start.go:83] releasing machines lock for "addons-206214", held for 9.563364537s
	I1018 12:16:15.361598  836859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-206214
	I1018 12:16:15.378319  836859 ssh_runner.go:195] Run: cat /version.json
	I1018 12:16:15.378345  836859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:16:15.378372  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:15.378408  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:15.399804  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:15.400349  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:15.499436  836859 ssh_runner.go:195] Run: systemctl --version
	I1018 12:16:15.593950  836859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:16:15.630081  836859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:16:15.634577  836859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:16:15.634717  836859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:16:15.665383  836859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 12:16:15.665412  836859 start.go:495] detecting cgroup driver to use...
	I1018 12:16:15.665456  836859 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:16:15.665518  836859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:16:15.682974  836859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:16:15.695723  836859 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:16:15.695787  836859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:16:15.713016  836859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:16:15.731429  836859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:16:15.861253  836859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:16:16.031748  836859 docker.go:234] disabling docker service ...
	I1018 12:16:16.031854  836859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:16:16.061062  836859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:16:16.079489  836859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:16:16.205229  836859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:16:16.326235  836859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:16:16.339321  836859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:16:16.353029  836859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:16:16.353098  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.361953  836859 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:16:16.362066  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.371472  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.380345  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.389519  836859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:16:16.397712  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.406303  836859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.420325  836859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:16:16.429454  836859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:16:16.437568  836859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:16:16.445394  836859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:16:16.568729  836859 ssh_runner.go:195] Run: sudo systemctl restart crio
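Editor's note: taken together, the crictl write and the sed edits above leave the node with roughly the following runtime configuration before crio is restarted. This is a reconstruction from the commands shown in the log, not a capture of the actual files:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the edits above)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]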
	I1018 12:16:16.693310  836859 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:16:16.693406  836859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:16:16.697284  836859 start.go:563] Will wait 60s for crictl version
	I1018 12:16:16.697398  836859 ssh_runner.go:195] Run: which crictl
	I1018 12:16:16.701252  836859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:16:16.725129  836859 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:16:16.725303  836859 ssh_runner.go:195] Run: crio --version
	I1018 12:16:16.757209  836859 ssh_runner.go:195] Run: crio --version
	I1018 12:16:16.790620  836859 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:16:16.793427  836859 cli_runner.go:164] Run: docker network inspect addons-206214 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:16:16.810823  836859 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:16:16.814717  836859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:16:16.825462  836859 kubeadm.go:883] updating cluster {Name:addons-206214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:16:16.825584  836859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:16:16.825644  836859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:16:16.863359  836859 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:16:16.863385  836859 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:16:16.863450  836859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:16:16.889876  836859 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:16:16.889899  836859 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:16:16.889907  836859 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 12:16:16.890007  836859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-206214 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
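Editor's note: the ExecStart flags above are written to the 10-kubeadm.conf systemd drop-in a few steps later in the log. Once the files are on the node, the merged unit the kubelet will actually run with can be inspected; a sketch:

	minikube -p addons-206214 ssh -- sudo systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service plus the
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in
	# containing the ExecStart line quoted above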
	I1018 12:16:16.890095  836859 ssh_runner.go:195] Run: crio config
	I1018 12:16:16.955218  836859 cni.go:84] Creating CNI manager for ""
	I1018 12:16:16.955243  836859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:16:16.955264  836859 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:16:16.955300  836859 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-206214 NodeName:addons-206214 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:16:16.955445  836859 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-206214"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:16:16.955538  836859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:16:16.963730  836859 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:16:16.963802  836859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:16:16.971715  836859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:16:16.985322  836859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:16:16.998332  836859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1018 12:16:17.013192  836859 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:16:17.017088  836859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
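Editor's note: this rewrite, together with the host.minikube.internal rewrite earlier, should leave the node's /etc/hosts with entries like the following. The addresses and names are taken from the log; the exact layout is an assumption:

	192.168.49.1	host.minikube.internal
	192.168.49.2	control-plane.minikube.internal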
	I1018 12:16:17.027441  836859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:16:17.136349  836859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:16:17.156230  836859 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214 for IP: 192.168.49.2
	I1018 12:16:17.156254  836859 certs.go:195] generating shared ca certs ...
	I1018 12:16:17.156279  836859 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:17.156413  836859 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:16:17.412844  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt ...
	I1018 12:16:17.412876  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt: {Name:mkc4b82375119f693df42479e770988d88209bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:17.413077  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key ...
	I1018 12:16:17.413091  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key: {Name:mk9dc014fc5eb975671220a3eb91be2810222359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:17.413181  836859 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:16:18.159062  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt ...
	I1018 12:16:18.159093  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt: {Name:mk8b05b47b979a21e25cd821712c7355198efc46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.159277  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key ...
	I1018 12:16:18.159291  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key: {Name:mk0a3c89ee9e87156cc868ddad1fe69147895d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.159378  836859 certs.go:257] generating profile certs ...
	I1018 12:16:18.159437  836859 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.key
	I1018 12:16:18.159455  836859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt with IP's: []
	I1018 12:16:18.262762  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt ...
	I1018 12:16:18.262793  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: {Name:mk254d3cde411022409e72b75879c6d383301371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.262968  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.key ...
	I1018 12:16:18.262980  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.key: {Name:mk90e5a0c595911270645a3e5cb5dff0ed83334b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.263064  836859 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key.46bde24a
	I1018 12:16:18.263084  836859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt.46bde24a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:16:18.494793  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt.46bde24a ...
	I1018 12:16:18.494827  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt.46bde24a: {Name:mk5d15bcfa121dc5f2850d18ad20cfda1c259aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.495027  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key.46bde24a ...
	I1018 12:16:18.495042  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key.46bde24a: {Name:mk31c1c9284ced2fdff5231fd7b185a244217b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:18.495131  836859 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt.46bde24a -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt
	I1018 12:16:18.495223  836859 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key.46bde24a -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key
	I1018 12:16:18.495278  836859 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.key
	I1018 12:16:18.495299  836859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.crt with IP's: []
	I1018 12:16:19.666883  836859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.crt ...
	I1018 12:16:19.666922  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.crt: {Name:mk1d0fa8d1a3516ad11b655da77daf84f8050b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:19.667120  836859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.key ...
	I1018 12:16:19.667135  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.key: {Name:mk1d26d66e7dfc8e55d2952c982f31454275e90d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:19.667331  836859 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:16:19.667382  836859 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:16:19.667406  836859 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:16:19.667433  836859 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:16:19.668132  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:16:19.688296  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:16:19.707143  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:16:19.726119  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:16:19.744947  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:16:19.763365  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:16:19.781952  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:16:19.800361  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:16:19.818372  836859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:16:19.836972  836859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:16:19.850754  836859 ssh_runner.go:195] Run: openssl version
	I1018 12:16:19.857387  836859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:16:19.866328  836859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:16:19.870082  836859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:16:19.870152  836859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:16:19.913955  836859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
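Editor's note: the b5213941.0 link name is the subject-hash form OpenSSL uses for CA lookups in /etc/ssl/certs. A minimal sketch of the same trust-store step done by hand, with the paths from the log:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expected to print b5213941, matching the link created above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"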
	I1018 12:16:19.922851  836859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:16:19.926612  836859 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:16:19.926663  836859 kubeadm.go:400] StartCluster: {Name:addons-206214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-206214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:16:19.926748  836859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:16:19.926814  836859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:16:19.957952  836859 cri.go:89] found id: ""
	I1018 12:16:19.958083  836859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:16:19.966161  836859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:16:19.974022  836859 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:16:19.974127  836859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:16:19.982173  836859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:16:19.982194  836859 kubeadm.go:157] found existing configuration files:
	
	I1018 12:16:19.982247  836859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:16:19.990159  836859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:16:19.990283  836859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:16:19.997760  836859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:16:20.015450  836859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:16:20.015582  836859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:16:20.024719  836859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:16:20.034056  836859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:16:20.034127  836859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:16:20.042848  836859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:16:20.051375  836859 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:16:20.051452  836859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:16:20.061563  836859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:16:20.103508  836859 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:16:20.103793  836859 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:16:20.143148  836859 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:16:20.143228  836859 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:16:20.143269  836859 kubeadm.go:318] OS: Linux
	I1018 12:16:20.143322  836859 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:16:20.143376  836859 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:16:20.143429  836859 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:16:20.143494  836859 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:16:20.143549  836859 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:16:20.143604  836859 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:16:20.143679  836859 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:16:20.143735  836859 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:16:20.143791  836859 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:16:20.216958  836859 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:16:20.217077  836859 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:16:20.217177  836859 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:16:20.226117  836859 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:16:20.232953  836859 out.go:252]   - Generating certificates and keys ...
	I1018 12:16:20.233068  836859 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:16:20.233149  836859 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:16:20.702595  836859 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:16:21.006042  836859 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:16:21.532208  836859 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:16:21.791820  836859 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:16:22.557644  836859 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:16:22.557794  836859 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-206214 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:16:22.897996  836859 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:16:22.898430  836859 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-206214 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:16:23.023531  836859 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:16:23.471833  836859 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:16:23.591234  836859 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:16:23.591629  836859 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:16:23.659827  836859 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:16:24.134316  836859 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:16:25.555504  836859 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:16:26.058709  836859 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:16:26.866925  836859 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:16:26.867730  836859 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:16:26.871057  836859 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:16:26.874419  836859 out.go:252]   - Booting up control plane ...
	I1018 12:16:26.874521  836859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:16:26.874603  836859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:16:26.875771  836859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:16:26.909645  836859 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:16:26.909922  836859 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:16:26.917819  836859 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:16:26.918093  836859 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:16:26.918282  836859 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:16:27.062107  836859 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:16:27.062237  836859 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:16:29.063901  836859 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001823146s
	I1018 12:16:29.067468  836859 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:16:29.067568  836859 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:16:29.067974  836859 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:16:29.068066  836859 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:16:35.064462  836859 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.996746998s
	I1018 12:16:35.351942  836859 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.283606895s
	I1018 12:16:36.069984  836859 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.002230901s
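Editor's note: the three control-plane health endpoints polled above can also be probed directly on the node; a sketch using the same URLs kubeadm reports (-k skips certificate verification, which is fine for a liveness check):

	curl -ks https://192.168.49.2:8443/livez; echo
	curl -ks https://127.0.0.1:10259/livez; echo
	curl -ks https://127.0.0.1:10257/healthz; echo
	# each should return "ok" once the respective component is healthy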
	I1018 12:16:36.090675  836859 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:16:36.108793  836859 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:16:36.126872  836859 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:16:36.127127  836859 kubeadm.go:318] [mark-control-plane] Marking the node addons-206214 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:16:36.139768  836859 kubeadm.go:318] [bootstrap-token] Using token: khshsh.o8s9b5n83lhecxu7
	I1018 12:16:36.142911  836859 out.go:252]   - Configuring RBAC rules ...
	I1018 12:16:36.143042  836859 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:16:36.147439  836859 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:16:36.155525  836859 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:16:36.161846  836859 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:16:36.166413  836859 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:16:36.170771  836859 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:16:36.483846  836859 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:16:36.910200  836859 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:16:37.477268  836859 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:16:37.478716  836859 kubeadm.go:318] 
	I1018 12:16:37.478797  836859 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:16:37.478803  836859 kubeadm.go:318] 
	I1018 12:16:37.478883  836859 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:16:37.478888  836859 kubeadm.go:318] 
	I1018 12:16:37.478925  836859 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:16:37.479485  836859 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:16:37.479545  836859 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:16:37.479551  836859 kubeadm.go:318] 
	I1018 12:16:37.479607  836859 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:16:37.479612  836859 kubeadm.go:318] 
	I1018 12:16:37.479684  836859 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:16:37.479690  836859 kubeadm.go:318] 
	I1018 12:16:37.479744  836859 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:16:37.479821  836859 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:16:37.479892  836859 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:16:37.479896  836859 kubeadm.go:318] 
	I1018 12:16:37.479983  836859 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:16:37.480065  836859 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:16:37.480070  836859 kubeadm.go:318] 
	I1018 12:16:37.480157  836859 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token khshsh.o8s9b5n83lhecxu7 \
	I1018 12:16:37.480264  836859 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e \
	I1018 12:16:37.480285  836859 kubeadm.go:318] 	--control-plane 
	I1018 12:16:37.480290  836859 kubeadm.go:318] 
	I1018 12:16:37.480378  836859 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:16:37.480383  836859 kubeadm.go:318] 
	I1018 12:16:37.480468  836859 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token khshsh.o8s9b5n83lhecxu7 \
	I1018 12:16:37.480574  836859 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e 
	I1018 12:16:37.482970  836859 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:16:37.483201  836859 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:16:37.483309  836859 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
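Editor's note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of recomputing it on the node to check it matches the printed value, following the standard kubeadm recipe and using the certificateDir from this log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print 1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e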
	I1018 12:16:37.483326  836859 cni.go:84] Creating CNI manager for ""
	I1018 12:16:37.483335  836859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:16:37.488466  836859 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:16:37.491409  836859 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:16:37.495470  836859 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:16:37.495491  836859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:16:37.508209  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
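Editor's note: after the kindnet manifest is applied, rollout can be checked from the host. A sketch, assuming the DaemonSet is named kindnet in kube-system (the manifest contents are not shown in this log):

	kubectl --context addons-206214 -n kube-system rollout status daemonset kindnet --timeout=120s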
	I1018 12:16:37.795180  836859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:16:37.795326  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:37.795397  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-206214 minikube.k8s.io/updated_at=2025_10_18T12_16_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-206214 minikube.k8s.io/primary=true
	I1018 12:16:38.026370  836859 ops.go:34] apiserver oom_adj: -16
	I1018 12:16:38.026483  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:38.526604  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:39.026647  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:39.526601  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:40.028102  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:40.527002  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:41.027368  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:41.526890  836859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:16:41.630380  836859 kubeadm.go:1113] duration metric: took 3.835110806s to wait for elevateKubeSystemPrivileges
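Editor's note: the loop above waits for the default service account while the minikube-rbac binding created earlier takes effect. Both can be confirmed afterwards; a sketch, with the object names taken from the log:

	kubectl --context addons-206214 get clusterrolebinding minikube-rbac
	kubectl --context addons-206214 -n default get serviceaccount default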
	I1018 12:16:41.630420  836859 kubeadm.go:402] duration metric: took 21.703762784s to StartCluster
	I1018 12:16:41.630449  836859 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:41.630575  836859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:16:41.630982  836859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:16:41.631187  836859 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:16:41.631334  836859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:16:41.631590  836859 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:16:41.631598  836859 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 12:16:41.631734  836859 addons.go:69] Setting yakd=true in profile "addons-206214"
	I1018 12:16:41.631749  836859 addons.go:238] Setting addon yakd=true in "addons-206214"
	I1018 12:16:41.631748  836859 addons.go:69] Setting inspektor-gadget=true in profile "addons-206214"
	I1018 12:16:41.631762  836859 addons.go:238] Setting addon inspektor-gadget=true in "addons-206214"
	I1018 12:16:41.631775  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.631782  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.632258  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.632297  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.632801  836859 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-206214"
	I1018 12:16:41.632821  836859 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-206214"
	I1018 12:16:41.632855  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.633264  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.636388  836859 addons.go:69] Setting cloud-spanner=true in profile "addons-206214"
	I1018 12:16:41.636432  836859 addons.go:238] Setting addon cloud-spanner=true in "addons-206214"
	I1018 12:16:41.636467  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.636922  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.637646  836859 addons.go:69] Setting metrics-server=true in profile "addons-206214"
	I1018 12:16:41.637710  836859 addons.go:238] Setting addon metrics-server=true in "addons-206214"
	I1018 12:16:41.637753  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.638283  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.640544  836859 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-206214"
	I1018 12:16:41.640610  836859 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-206214"
	I1018 12:16:41.640645  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.641096  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.647944  836859 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-206214"
	I1018 12:16:41.647980  836859 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-206214"
	I1018 12:16:41.648033  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.648498  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.653826  836859 addons.go:69] Setting default-storageclass=true in profile "addons-206214"
	I1018 12:16:41.653860  836859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-206214"
	I1018 12:16:41.654262  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.663080  836859 addons.go:69] Setting registry=true in profile "addons-206214"
	I1018 12:16:41.663109  836859 addons.go:238] Setting addon registry=true in "addons-206214"
	I1018 12:16:41.663154  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.663627  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.675680  836859 addons.go:69] Setting registry-creds=true in profile "addons-206214"
	I1018 12:16:41.675774  836859 addons.go:238] Setting addon registry-creds=true in "addons-206214"
	I1018 12:16:41.675846  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.677232  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.692133  836859 addons.go:69] Setting gcp-auth=true in profile "addons-206214"
	I1018 12:16:41.692226  836859 mustload.go:65] Loading cluster: addons-206214
	I1018 12:16:41.695328  836859 addons.go:69] Setting ingress=true in profile "addons-206214"
	I1018 12:16:41.695357  836859 addons.go:238] Setting addon ingress=true in "addons-206214"
	I1018 12:16:41.695401  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.695907  836859 addons.go:69] Setting storage-provisioner=true in profile "addons-206214"
	I1018 12:16:41.695926  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.696078  836859 addons.go:238] Setting addon storage-provisioner=true in "addons-206214"
	I1018 12:16:41.696127  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.696567  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.714712  836859 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-206214"
	I1018 12:16:41.714756  836859 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-206214"
	I1018 12:16:41.715112  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.719272  836859 addons.go:69] Setting ingress-dns=true in profile "addons-206214"
	I1018 12:16:41.719308  836859 addons.go:238] Setting addon ingress-dns=true in "addons-206214"
	I1018 12:16:41.719351  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.719827  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.734936  836859 addons.go:69] Setting volcano=true in profile "addons-206214"
	I1018 12:16:41.734975  836859 addons.go:238] Setting addon volcano=true in "addons-206214"
	I1018 12:16:41.735090  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.735562  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.744200  836859 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 12:16:41.747083  836859 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 12:16:41.747112  836859 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 12:16:41.747187  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.755227  836859 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 12:16:41.755537  836859 addons.go:69] Setting volumesnapshots=true in profile "addons-206214"
	I1018 12:16:41.755558  836859 addons.go:238] Setting addon volumesnapshots=true in "addons-206214"
	I1018 12:16:41.755600  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.756081  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.758318  836859 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:16:41.758342  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 12:16:41.758515  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.780471  836859 out.go:179] * Verifying Kubernetes components...
	I1018 12:16:41.783603  836859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:16:41.783980  836859 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 12:16:41.787942  836859 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:16:41.788231  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.790016  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 12:16:41.790037  836859 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 12:16:41.790125  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.811017  836859 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 12:16:41.830127  836859 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:16:41.836037  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 12:16:41.836230  836859 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 12:16:41.836236  836859 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 12:16:41.837610  836859 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:16:41.857430  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 12:16:41.857512  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.867412  836859 addons.go:238] Setting addon default-storageclass=true in "addons-206214"
	I1018 12:16:41.867454  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.872186  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.888858  836859 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:16:41.888880  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:16:41.888942  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.895535  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:41.896830  836859 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 12:16:41.896865  836859 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 12:16:41.896986  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.932120  836859 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 12:16:41.935076  836859 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 12:16:41.935102  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 12:16:41.935172  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:41.938191  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 12:16:41.944306  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 12:16:41.950810  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 12:16:41.955071  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 12:16:41.956551  836859 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 12:16:41.985832  836859 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 12:16:41.989397  836859 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-206214"
	I1018 12:16:41.989441  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:41.989839  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:41.999167  836859 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 12:16:42.005037  836859 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:16:42.005061  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 12:16:42.005138  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	W1018 12:16:42.010733  836859 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 12:16:42.011318  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.015109  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.017001  836859 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:16:42.018157  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 12:16:42.018394  836859 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 12:16:42.044433  836859 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:16:42.047526  836859 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:16:42.047551  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 12:16:42.047619  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.056096  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 12:16:42.062640  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 12:16:42.062729  836859 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 12:16:42.062845  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.078273  836859 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:16:42.078298  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 12:16:42.078368  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.103512  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 12:16:42.108320  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:42.112958  836859 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 12:16:42.115776  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 12:16:42.115808  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 12:16:42.115889  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.120471  836859 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:16:42.120495  836859 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:16:42.120570  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.135955  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.137027  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.137877  836859 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 12:16:42.137895  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 12:16:42.137984  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.167949  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.169606  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.247961  836859 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 12:16:42.250917  836859 out.go:179]   - Using image docker.io/busybox:stable
	I1018 12:16:42.253853  836859 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:16:42.253876  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 12:16:42.253951  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:42.261078  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.288727  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.289680  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.293085  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.319977  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.320020  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.320692  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	W1018 12:16:42.323930  836859 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:16:42.323968  836859 retry.go:31] will retry after 209.516068ms: ssh: handshake failed: EOF
	I1018 12:16:42.347291  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:42.484885  836859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:16:42.485179  836859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:16:42.732110  836859 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:42.732188  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 12:16:42.822657  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:16:43.001561  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 12:16:43.001653  836859 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 12:16:43.018453  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:43.021229  836859 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 12:16:43.021304  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 12:16:43.136083  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:16:43.203538  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 12:16:43.203618  836859 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 12:16:43.226238  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 12:16:43.226314  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 12:16:43.226842  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 12:16:43.270726  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:16:43.274133  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:16:43.293991  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:16:43.313183  836859 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 12:16:43.313260  836859 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 12:16:43.373168  836859 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 12:16:43.373251  836859 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 12:16:43.374283  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 12:16:43.374352  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 12:16:43.419239  836859 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:16:43.419321  836859 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 12:16:43.468322  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:16:43.468471  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 12:16:43.468503  836859 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 12:16:43.472311  836859 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 12:16:43.472390  836859 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 12:16:43.492181  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:16:43.504445  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:16:43.547487  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 12:16:43.547570  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 12:16:43.591322  836859 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:16:43.591403  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 12:16:43.649945  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:16:43.676539  836859 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:16:43.676618  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 12:16:43.695399  836859 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 12:16:43.695481  836859 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 12:16:43.801374  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:16:43.804892  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 12:16:43.804967  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 12:16:43.929143  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:16:43.949889  836859 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 12:16:43.949981  836859 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 12:16:44.108331  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 12:16:44.108413  836859 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 12:16:44.288955  836859 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 12:16:44.289032  836859 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 12:16:44.345598  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 12:16:44.345671  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 12:16:44.567033  836859 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:16:44.567054  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 12:16:44.644853  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 12:16:44.644876  836859 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 12:16:44.662069  836859 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.176839614s)
	I1018 12:16:44.662096  836859 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
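	(Editor's illustrative note: the replace logged above pipes the live coredns ConfigMap through sed to splice a hosts stanza, mapping 192.168.49.1 to host.minikube.internal, in front of the "forward . /etc/resolv.conf" directive. Below is a minimal client-go sketch of the same edit; the function name, kubeconfig path, and string handling are assumptions for illustration, not minikube's actual implementation.)

	// Sketch only: patch the coredns Corefile to add a static host record,
	// mirroring the sed-based edit in the log above. Not minikube's code.
	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func injectHostRecord(kubeconfig, hostIP string) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Insert a hosts{} block ahead of the forward directive, as the sed
		// expression in the log does.
		hosts := "hosts {\n       " + hostIP + " host.minikube.internal\n       fallthrough\n    }\n    "
		cm.Data["Corefile"] = strings.Replace(
			cm.Data["Corefile"], "forward . /etc/resolv.conf", hosts+"forward . /etc/resolv.conf", 1)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}

	func main() {
		if err := injectHostRecord(clientcmd.RecommendedHomeFile, "192.168.49.1"); err != nil {
			panic(err)
		}
	}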
	I1018 12:16:44.663073  836859 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.178112579s)
	I1018 12:16:44.664002  836859 node_ready.go:35] waiting up to 6m0s for node "addons-206214" to be "Ready" ...
	I1018 12:16:44.664224  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.841540249s)
	I1018 12:16:44.809758  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:16:44.898048  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 12:16:44.898125  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 12:16:45.195103  836859 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-206214" context rescaled to 1 replicas
	I1018 12:16:45.252656  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 12:16:45.252745  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 12:16:45.431798  836859 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:16:45.431888  836859 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 12:16:45.579903  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1018 12:16:46.733897  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:47.534082  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.515537899s)
	W1018 12:16:47.534114  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:47.534134  836859 retry.go:31] will retry after 154.885209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
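	(Editor's illustrative note: the alternating "apply failed, will retry" / "will retry after Nms" pairs above come from a retry loop with a growing, jittered delay. The sketch below shows that generic pattern; the function name, delays, and output are illustrative assumptions, not minikube's retry.go.)

	// Generic sketch of the "will retry after Nms" pattern seen in the log.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or attempts run out,
	// sleeping a jittered, roughly doubling delay between failures.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("apply failed")
			}
			return nil
		})
		fmt.Println("final error:", err)
	}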
	I1018 12:16:47.534187  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.398031205s)
	I1018 12:16:47.534228  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.307334672s)
	I1018 12:16:47.689389  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:48.642834  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.348758503s)
	I1018 12:16:48.642952  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.174566557s)
	I1018 12:16:48.643232  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.150972102s)
	I1018 12:16:48.643325  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.138805838s)
	I1018 12:16:48.643524  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.99349581s)
	I1018 12:16:48.643563  836859 addons.go:479] Verifying addon metrics-server=true in "addons-206214"
	I1018 12:16:48.643624  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.842175951s)
	I1018 12:16:48.643671  836859 addons.go:479] Verifying addon registry=true in "addons-206214"
	I1018 12:16:48.643897  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.373103996s)
	I1018 12:16:48.643977  836859 addons.go:479] Verifying addon ingress=true in "addons-206214"
	I1018 12:16:48.644081  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.714869978s)
	I1018 12:16:48.643939  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.369730972s)
	I1018 12:16:48.646926  836859 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-206214 service yakd-dashboard -n yakd-dashboard
	
	I1018 12:16:48.647036  836859 out.go:179] * Verifying registry addon...
	I1018 12:16:48.647086  836859 out.go:179] * Verifying ingress addon...
	I1018 12:16:48.651851  836859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 12:16:48.652848  836859 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 12:16:48.663609  836859 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:16:48.663715  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:48.664366  836859 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 12:16:48.664420  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:48.677686  836859 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
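	(Editor's illustrative note: "Operation cannot be fulfilled ... the object has been modified" is the API server's standard optimistic-concurrency conflict; the usual remedy is to re-read the object and retry the update with the fresh resourceVersion. Below is a hedged client-go sketch using retry.RetryOnConflict; it is illustrative only and not the code path minikube takes here.)

	// Sketch: mark the "standard" StorageClass as default while tolerating
	// the update conflict seen in the log above. Not minikube's code.
	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func markDefault(cs *kubernetes.Clientset, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ctx := context.Background()
			// Re-read the latest version on every attempt so the update
			// carries a current resourceVersion.
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := markDefault(kubernetes.NewForConfigOrDie(cfg), "standard"); err != nil {
			panic(err)
		}
	}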
	I1018 12:16:48.694639  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.884788351s)
	W1018 12:16:48.694681  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:16:48.694703  836859 retry.go:31] will retry after 311.198419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:16:49.006836  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:16:49.087542  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.507525624s)
	I1018 12:16:49.087710  836859 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-206214"
	I1018 12:16:49.092826  836859 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 12:16:49.096565  836859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 12:16:49.101965  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.412496127s)
	W1018 12:16:49.102050  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:49.102087  836859 retry.go:31] will retry after 392.224234ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:49.118678  836859 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:16:49.118748  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
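	(Editor's illustrative note: the kapi.go lines above poll pods matched by a label selector until they leave Pending. The sketch below shows that style of wait with client-go; the selector, namespace, and 6-minute budget are taken from the log, while the function name and poll interval are assumptions, not minikube's kapi.go.)

	// Sketch: poll pods by label selector until all are Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if len(pods.Items) > 0 && running == len(pods.Items) {
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: %d/%d running\n",
				selector, running, len(pods.Items))
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPods(cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}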
	W1018 12:16:49.167236  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:49.219195  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:49.219534  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:49.495279  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:49.601157  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:49.655363  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:49.657998  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:49.718048  836859 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 12:16:49.718168  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:49.740929  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:49.873939  836859 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 12:16:49.893156  836859 addons.go:238] Setting addon gcp-auth=true in "addons-206214"
	I1018 12:16:49.893203  836859 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:16:49.893655  836859 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:16:49.913669  836859 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 12:16:49.913742  836859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:16:49.933620  836859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:16:50.101833  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:50.156884  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:50.157351  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:50.599814  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:50.654900  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:50.655624  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:51.099983  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:51.156227  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:51.156372  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:51.167591  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:51.601525  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:51.657186  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:51.657461  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:51.819923  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.324606785s)
	W1018 12:16:51.819964  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:51.819983  836859 retry.go:31] will retry after 704.903605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:51.820038  836859 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.906346878s)
	I1018 12:16:51.820206  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.812914467s)
	I1018 12:16:51.823195  836859 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:16:51.826151  836859 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 12:16:51.829091  836859 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 12:16:51.829125  836859 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 12:16:51.843217  836859 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 12:16:51.843297  836859 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 12:16:51.863007  836859 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:16:51.863038  836859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 12:16:51.880455  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:16:52.100794  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:52.157214  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:52.157658  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:52.404648  836859 addons.go:479] Verifying addon gcp-auth=true in "addons-206214"
	I1018 12:16:52.407882  836859 out.go:179] * Verifying gcp-auth addon...
	I1018 12:16:52.412804  836859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 12:16:52.421132  836859 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 12:16:52.421157  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:52.525545  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:52.602922  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:52.657184  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:52.657772  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:52.916464  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:53.100785  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:53.157744  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:53.158178  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:53.168834  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	W1018 12:16:53.343845  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:53.343882  836859 retry.go:31] will retry after 960.020876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:53.415786  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:53.599729  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:53.655750  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:53.656042  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:53.916188  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:54.100185  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:54.155522  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:54.156826  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:54.304838  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:54.416281  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:54.600857  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:54.656940  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:54.657418  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:54.916567  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:55.100704  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 12:16:55.123902  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:55.123934  836859 retry.go:31] will retry after 1.824477957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:55.156037  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:55.156369  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:55.416465  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:55.600265  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:55.654967  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:55.656209  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:55.668726  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:55.916077  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:56.099937  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:56.155007  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:56.156304  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:56.415775  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:56.600046  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:56.655290  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:56.656356  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:56.915754  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:56.948803  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:16:57.100807  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:57.155851  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:57.157859  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:57.416608  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:57.601155  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:57.656844  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:57.658160  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:57.761048  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:57.761081  836859 retry.go:31] will retry after 2.20875503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:16:57.916316  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:58.100492  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:58.155843  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:58.157464  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:16:58.167580  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:16:58.416784  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:58.599721  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:58.655765  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:58.656008  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:58.916788  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:59.099513  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:59.155752  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:59.156168  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:59.417000  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:59.599632  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:16:59.655894  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:16:59.656072  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:16:59.917019  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:16:59.970334  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:00.101371  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:00.164886  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:00.165651  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:00.171158  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:00.418491  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:00.601608  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:00.657424  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:00.658076  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:00.916566  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:01.046747  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.076365369s)
	W1018 12:17:01.046785  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:01.046805  836859 retry.go:31] will retry after 3.668859693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
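Every apply retry above fails for the same reason: kubectl's schema validation rejects at least one document in /etc/kubernetes/addons/ig-crd.yaml because it carries no apiVersion or kind field, and the retry loop simply re-runs the identical command. A quick way to reproduce the validation failure without touching cluster state, assuming shell access to the node and the same paths as in the log, is a client-side dry run:

	# Hypothetical diagnostic; binary and manifest paths are copied from the log above.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  --dry-run=client --validate=true -f /etc/kubernetes/addons/ig-crd.yaml
	# A well-formed CRD document declares both identifying fields, for example:
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition

The gadget objects from the companion manifest keep applying cleanly (everything is reported "unchanged" or "configured"), which suggests the failure is isolated to the CRD file.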
	I1018 12:17:01.100249  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:01.157095  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:01.157212  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:01.416612  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:01.599895  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:01.656777  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:01.658044  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:01.917903  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:02.100232  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:02.154949  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:02.156835  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:02.416242  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:02.600557  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:02.656591  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:02.656895  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:02.667070  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:02.916376  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:03.100970  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:03.156575  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:03.156863  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:03.416093  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:03.601341  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:03.701671  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:03.702720  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:03.917501  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:04.100924  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:04.155229  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:04.155741  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:04.416101  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:04.600528  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:04.655480  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:04.656557  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:04.667724  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:04.715866  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:04.916579  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:05.099810  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:05.157767  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:05.158649  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:05.416268  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:17:05.600720  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:05.600814  836859 retry.go:31] will retry after 5.24493786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:05.605972  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:05.657115  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:05.658716  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:05.917033  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:06.100548  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:06.155624  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:06.157075  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:06.416664  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:06.600139  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:06.656587  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:06.656772  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:06.667830  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:06.916313  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:07.100201  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:07.155991  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:07.156081  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:07.416927  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:07.600481  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:07.655694  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:07.656780  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:07.917175  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:08.100504  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:08.156768  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:08.157019  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:08.415729  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:08.599697  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:08.656121  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:08.656320  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:08.916216  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:09.100591  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:09.155407  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:09.156061  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:09.167855  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:09.416064  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:09.600071  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:09.655982  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:09.656039  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:09.916944  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:10.099996  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:10.154785  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:10.155997  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:10.415811  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:10.599723  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:10.655566  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:10.656065  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:10.846462  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:10.916789  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:11.100593  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:11.157635  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:11.157978  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:11.428365  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:11.600090  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:11.659392  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:11.663260  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:11.668212  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	W1018 12:17:11.704373  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:11.704439  836859 retry.go:31] will retry after 3.739788043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:11.916752  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:12.100437  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:12.155866  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:12.156589  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:12.415672  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:12.599930  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:12.656492  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:12.656934  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:12.916411  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:13.100323  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:13.155424  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:13.156297  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:13.416527  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:13.600746  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:13.654664  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:13.655784  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:13.916610  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:14.100557  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:14.155248  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:14.156485  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:14.167008  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:14.416483  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:14.600416  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:14.655001  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:14.656655  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:14.916171  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:15.100450  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:15.155459  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:15.156267  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:15.416376  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:15.445439  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:15.599986  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:15.657325  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:15.658000  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:15.916976  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:16.099935  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:16.156060  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:16.156174  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:16.167174  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	W1018 12:17:16.265258  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:16.265289  836859 retry.go:31] will retry after 14.389338895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:16.416417  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:16.599396  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:16.655949  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:16.656042  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:16.917084  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:17.100223  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:17.155465  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:17.156641  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:17.416049  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:17.600487  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:17.655406  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:17.655963  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:17.917469  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:18.100515  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:18.155434  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:18.156784  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:18.167782  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:18.415771  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:18.600082  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:18.655900  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:18.656238  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:18.915868  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:19.100014  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:19.154830  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:19.156344  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:19.415910  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:19.600125  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:19.655069  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:19.656711  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:19.916717  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:20.100060  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:20.154726  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:20.155873  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:20.167825  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:20.415886  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:20.599840  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:20.654630  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:20.655769  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:20.916389  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:21.100989  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:21.154947  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:21.155774  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:21.415924  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:21.599822  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:21.654699  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:21.656014  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:21.916377  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:22.100669  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:22.156257  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:22.156451  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:22.417034  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:22.599875  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:22.655114  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:22.656029  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 12:17:22.666868  836859 node_ready.go:57] node "addons-206214" has "Ready":"False" status (will retry)
	I1018 12:17:22.916318  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:23.100427  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:23.155596  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:23.156388  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:23.416573  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:23.599749  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:23.655761  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:23.655912  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:23.920032  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:24.120192  836859 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:17:24.120273  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:24.228046  836859 node_ready.go:49] node "addons-206214" is "Ready"
	I1018 12:17:24.228126  836859 node_ready.go:38] duration metric: took 39.564084165s for node "addons-206214" to be "Ready" ...
	I1018 12:17:24.228154  836859 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:17:24.228239  836859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:17:24.252961  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:24.253506  836859 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:17:24.253554  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:24.255982  836859 api_server.go:72] duration metric: took 42.624761246s to wait for apiserver process to appear ...
	I1018 12:17:24.256041  836859 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:17:24.256075  836859 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:17:24.275735  836859 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:17:24.278615  836859 api_server.go:141] control plane version: v1.34.1
	I1018 12:17:24.278638  836859 api_server.go:131] duration metric: took 22.576602ms to wait for apiserver health ...
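The healthz wait above is a plain HTTPS GET against the API server endpoint recorded in the log; a healthy control plane answers with the literal body "ok". An equivalent manual probe, assuming the same endpoint and accepting the cluster's self-signed certificate, would look like this:

	# Hypothetical manual check mirroring the wait above; -k skips TLS verification for brevity.
	curl -sk https://192.168.49.2:8443/healthz
	# Expected response from a healthy apiserver: ok

The /healthz, /readyz, and /livez paths are exposed to unauthenticated clients by default, which is why no client certificate is needed for this kind of check.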
	I1018 12:17:24.278647  836859 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:17:24.288066  836859 system_pods.go:59] 19 kube-system pods found
	I1018 12:17:24.288151  836859 system_pods.go:61] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:24.288177  836859 system_pods.go:61] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:24.288216  836859 system_pods.go:61] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending
	I1018 12:17:24.288242  836859 system_pods.go:61] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending
	I1018 12:17:24.288262  836859 system_pods.go:61] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:24.288283  836859 system_pods.go:61] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:24.288304  836859 system_pods.go:61] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:24.288339  836859 system_pods.go:61] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:24.288362  836859 system_pods.go:61] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending
	I1018 12:17:24.288383  836859 system_pods.go:61] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:24.288414  836859 system_pods.go:61] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:24.288440  836859 system_pods.go:61] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending
	I1018 12:17:24.288459  836859 system_pods.go:61] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending
	I1018 12:17:24.288482  836859 system_pods.go:61] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:24.288517  836859 system_pods.go:61] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:24.288540  836859 system_pods.go:61] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending
	I1018 12:17:24.288558  836859 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending
	I1018 12:17:24.288578  836859 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending
	I1018 12:17:24.288602  836859 system_pods.go:61] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:24.288631  836859 system_pods.go:74] duration metric: took 9.977156ms to wait for pod list to return data ...
	I1018 12:17:24.288658  836859 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:17:24.296395  836859 default_sa.go:45] found service account: "default"
	I1018 12:17:24.296466  836859 default_sa.go:55] duration metric: took 7.786292ms for default service account to be created ...
	I1018 12:17:24.296490  836859 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:17:24.317761  836859 system_pods.go:86] 19 kube-system pods found
	I1018 12:17:24.317843  836859 system_pods.go:89] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:24.317871  836859 system_pods.go:89] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:24.317914  836859 system_pods.go:89] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending
	I1018 12:17:24.317942  836859 system_pods.go:89] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending
	I1018 12:17:24.317962  836859 system_pods.go:89] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:24.317983  836859 system_pods.go:89] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:24.318016  836859 system_pods.go:89] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:24.318039  836859 system_pods.go:89] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:24.318057  836859 system_pods.go:89] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending
	I1018 12:17:24.318076  836859 system_pods.go:89] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:24.318097  836859 system_pods.go:89] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:24.318127  836859 system_pods.go:89] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending
	I1018 12:17:24.318155  836859 system_pods.go:89] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:17:24.318179  836859 system_pods.go:89] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:24.318203  836859 system_pods.go:89] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:24.318235  836859 system_pods.go:89] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending
	I1018 12:17:24.318262  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending
	I1018 12:17:24.318283  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending
	I1018 12:17:24.318306  836859 system_pods.go:89] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:24.318352  836859 retry.go:31] will retry after 268.188257ms: missing components: kube-dns
	I1018 12:17:24.428441  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:24.592755  836859 system_pods.go:86] 19 kube-system pods found
	I1018 12:17:24.592852  836859 system_pods.go:89] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:24.592878  836859 system_pods.go:89] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:24.592919  836859 system_pods.go:89] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:17:24.592954  836859 system_pods.go:89] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:17:24.592975  836859 system_pods.go:89] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:24.592997  836859 system_pods.go:89] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:24.593027  836859 system_pods.go:89] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:24.593049  836859 system_pods.go:89] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:24.593069  836859 system_pods.go:89] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:17:24.593089  836859 system_pods.go:89] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:24.593109  836859 system_pods.go:89] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:24.593138  836859 system_pods.go:89] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:17:24.593164  836859 system_pods.go:89] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:17:24.593185  836859 system_pods.go:89] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:24.593207  836859 system_pods.go:89] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:24.593237  836859 system_pods.go:89] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:17:24.593261  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:24.593284  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:24.593306  836859 system_pods.go:89] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:24.593348  836859 retry.go:31] will retry after 318.991686ms: missing components: kube-dns
	I1018 12:17:24.691702  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:24.691937  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:24.692687  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:24.920904  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:24.924994  836859 system_pods.go:86] 19 kube-system pods found
	I1018 12:17:24.925074  836859 system_pods.go:89] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:17:24.925101  836859 system_pods.go:89] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:24.925142  836859 system_pods.go:89] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:17:24.925167  836859 system_pods.go:89] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:17:24.925186  836859 system_pods.go:89] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:24.925208  836859 system_pods.go:89] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:24.925240  836859 system_pods.go:89] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:24.925263  836859 system_pods.go:89] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:24.925287  836859 system_pods.go:89] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:17:24.925307  836859 system_pods.go:89] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:24.925342  836859 system_pods.go:89] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:24.925367  836859 system_pods.go:89] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:17:24.925388  836859 system_pods.go:89] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:17:24.925410  836859 system_pods.go:89] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:24.925443  836859 system_pods.go:89] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:24.925468  836859 system_pods.go:89] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:17:24.925490  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:24.925514  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:24.925549  836859 system_pods.go:89] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:17:24.925581  836859 retry.go:31] will retry after 401.03519ms: missing components: kube-dns
	I1018 12:17:25.106077  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:25.202888  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:25.203462  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:25.340827  836859 system_pods.go:86] 19 kube-system pods found
	I1018 12:17:25.348947  836859 system_pods.go:89] "coredns-66bc5c9577-nnvks" [f01ee9c2-fdb2-4b54-aa0c-c1e650ed8354] Running
	I1018 12:17:25.349020  836859 system_pods.go:89] "csi-hostpath-attacher-0" [2b235a9b-14e3-430e-8678-3255e6cfcc32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:17:25.349046  836859 system_pods.go:89] "csi-hostpath-resizer-0" [e3a1dc45-dc08-4fe8-88a2-4d40523893b8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:17:25.349071  836859 system_pods.go:89] "csi-hostpathplugin-sx7b6" [5104d3ae-f7ee-42db-876e-1e66d0941f76] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:17:25.349113  836859 system_pods.go:89] "etcd-addons-206214" [f5e5596c-df2a-4c9f-bf73-e586118ad091] Running
	I1018 12:17:25.349134  836859 system_pods.go:89] "kindnet-l2ffr" [51c0b7d9-c7cd-4e1b-91da-beb683a41da0] Running
	I1018 12:17:25.349155  836859 system_pods.go:89] "kube-apiserver-addons-206214" [42f96707-c2bd-4c6f-88a8-236898538890] Running
	I1018 12:17:25.349187  836859 system_pods.go:89] "kube-controller-manager-addons-206214" [70d66110-cf2f-44b3-b4d5-3326fec6165f] Running
	I1018 12:17:25.349212  836859 system_pods.go:89] "kube-ingress-dns-minikube" [0a223f3c-78b0-407a-8024-1ae49f5f1487] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:17:25.349230  836859 system_pods.go:89] "kube-proxy-hlgtx" [cff8b82e-0d57-4a92-9d9c-c182df55fb98] Running
	I1018 12:17:25.349251  836859 system_pods.go:89] "kube-scheduler-addons-206214" [d8f4459a-edeb-48bc-b73a-c603e0662c80] Running
	I1018 12:17:25.349286  836859 system_pods.go:89] "metrics-server-85b7d694d7-lxg99" [e661aaf3-1361-4773-804b-550f68a5f474] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:17:25.349312  836859 system_pods.go:89] "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:17:25.349335  836859 system_pods.go:89] "registry-6b586f9694-mvmwh" [aca7322d-2a94-4ea2-bee5-db8ac1c272a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:17:25.349359  836859 system_pods.go:89] "registry-creds-764b6fb674-46n6w" [42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:17:25.349390  836859 system_pods.go:89] "registry-proxy-cxqbx" [c07e3d50-e9df-4d88-8956-f11f7df97ee2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:17:25.349419  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fp5gt" [fa882f8d-f143-493a-aa12-f749e6a5e09b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:25.349443  836859 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sc8l2" [83c9a8c7-6fde-4b22-8ffb-a4d5dba9582c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:17:25.349465  836859 system_pods.go:89] "storage-provisioner" [a6a9a92b-48c3-4091-9f0f-79b2b92f8d7d] Running
	I1018 12:17:25.349504  836859 system_pods.go:126] duration metric: took 1.052993447s to wait for k8s-apps to be running ...
	I1018 12:17:25.349530  836859 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:17:25.349624  836859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:17:25.368544  836859 system_svc.go:56] duration metric: took 19.004594ms WaitForService to wait for kubelet
	I1018 12:17:25.368616  836859 kubeadm.go:586] duration metric: took 43.737397207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:25.368655  836859 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:17:25.377452  836859 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:17:25.377537  836859 node_conditions.go:123] node cpu capacity is 2
	I1018 12:17:25.377563  836859 node_conditions.go:105] duration metric: took 8.886824ms to run NodePressure ...
	I1018 12:17:25.377589  836859 start.go:241] waiting for startup goroutines ...
	I1018 12:17:25.416512  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:25.600809  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:25.656390  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:25.656757  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:25.916200  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:26.100574  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:26.156793  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:26.157013  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:26.416611  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:26.602808  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:26.658391  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:26.659128  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:26.921324  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:27.104044  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:27.160966  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:27.161286  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:27.418508  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:27.602496  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:27.660667  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:27.660912  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:27.916666  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:28.100806  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:28.157616  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:28.157866  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:28.415720  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:28.601415  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:28.665711  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:28.666386  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:28.916907  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:29.101232  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:29.162427  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:29.162805  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:29.417793  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:29.605204  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:29.662306  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:29.662805  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:29.916754  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:30.104450  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:30.162951  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:30.163472  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:30.417869  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:30.602395  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:30.655423  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:30.660880  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:30.661641  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:30.916077  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:31.101322  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:31.157703  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:31.158034  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:31.416143  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:31.600054  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:31.657391  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:31.657748  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:31.861871  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.206311386s)
	W1018 12:17:31.861903  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:31.861923  836859 retry.go:31] will retry after 9.225962136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
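	[annotation] The apply failures retried above (and again later in this log) all report the same cause: kubectl refuses to validate /etc/kubernetes/addons/ig-crd.yaml because the manifest content it reads has no top-level apiVersion and kind fields, so schema validation fails before the CRD is applied. The actual contents of ig-crd.yaml are not captured in this log; purely as an illustrative sketch of what the validator expects (the group/version and resource name below are assumptions, not taken from the addon), a CRD manifest that passes this check starts with a header like:
	
	    # hypothetical ig-crd.yaml header -- illustrative sketch only
	    apiVersion: apiextensions.k8s.io/v1    # required top-level field the validator reports as missing
	    kind: CustomResourceDefinition         # required top-level field the validator reports as missing
	    metadata:
	      name: traces.gadget.kinvolk.io       # hypothetical CRD name
	    spec:
	      # group, names, scope and versions omitted in this sketch
	
	The --validate=false flag mentioned in the error output would let the apply proceed, but it only suppresses the check rather than fixing the malformed manifest, which is why the retries keep failing with the same message.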
	I1018 12:17:31.916436  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:32.099690  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:32.156856  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:32.157783  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:32.416180  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:32.600941  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:32.656594  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:32.656974  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:32.916636  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:33.100971  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:33.157333  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:33.159200  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:33.416572  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:33.600633  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:33.662588  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:33.663039  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:33.916594  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:34.100622  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:34.156333  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:34.156447  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:34.416588  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:34.600953  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:34.655330  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:34.656082  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:34.916655  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:35.100926  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:35.157164  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:35.158699  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:35.416922  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:35.600886  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:35.658030  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:35.658626  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:35.917158  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:36.101068  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:36.157402  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:36.157833  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:36.416153  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:36.601229  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:36.658007  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:36.658590  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:36.917285  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:37.101680  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:37.155845  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:37.156760  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:37.417228  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:37.600905  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:37.655021  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:37.656376  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:37.917711  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:38.100499  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:38.158139  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:38.160153  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:38.418178  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:38.600476  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:38.657488  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:38.658361  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:38.916361  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:39.110782  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:39.158003  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:39.158643  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:39.417582  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:39.601630  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:39.659424  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:39.660353  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:39.917446  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:40.100613  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:40.156026  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:40.156160  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:40.416818  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:40.600790  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:40.655700  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:40.657053  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:40.918564  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:41.088872  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:17:41.101804  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:41.158372  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:41.158849  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:41.416626  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:41.603343  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:41.660301  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:41.660803  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:41.917411  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:42.103527  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:42.149451  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.060480249s)
	W1018 12:17:42.149507  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:42.149531  836859 retry.go:31] will retry after 22.313412643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:17:42.157551  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:42.158071  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:42.416558  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:42.600294  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:42.655223  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:42.657221  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:42.916995  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:43.100405  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:43.157259  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:43.157795  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:43.416261  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:43.601149  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:43.656287  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:43.656613  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:43.917129  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:44.101364  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:44.158416  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:44.158781  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:44.416022  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:44.600921  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:44.656168  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:44.657091  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:44.916509  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:45.112469  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:45.161811  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:45.164209  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:45.418479  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:45.599630  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:45.656795  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:45.656951  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:45.916559  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:46.100309  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:46.156086  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:46.157698  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:46.416542  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:46.599846  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:46.655122  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:46.657675  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:46.916778  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:47.101285  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:47.156138  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:47.157960  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:47.416612  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:47.600165  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:47.656271  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:47.658395  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:47.917017  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:48.100802  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:48.156641  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:48.157089  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:48.417951  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:48.601037  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:48.655143  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:48.656260  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:48.917543  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:49.099876  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:49.156529  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:49.157015  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:49.418379  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:49.600957  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:49.655191  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:49.656318  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:49.916904  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:50.100909  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:50.157045  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:50.157403  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:50.417473  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:50.601132  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:50.657290  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:50.657486  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:50.916879  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:51.104873  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:51.202409  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:51.202590  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:51.416748  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:51.601375  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:51.655677  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:51.656329  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:51.917986  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:52.100404  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:52.155265  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:52.158219  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:52.423064  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:52.601363  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:52.657957  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:52.659346  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:52.917122  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:53.100695  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:53.158615  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:53.158724  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:53.416829  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:53.600637  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:53.656713  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:53.657248  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:53.916227  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:54.100616  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:54.157539  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:54.160203  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:54.416426  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:54.602016  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:54.657136  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:54.657517  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:54.918292  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:55.101360  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:55.157501  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:55.157962  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:55.438732  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:55.600405  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:55.655554  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:55.656593  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:55.917470  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:56.101432  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:56.155309  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:56.156907  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:56.416225  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:56.601172  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:56.655922  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:56.657171  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:56.916169  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:57.101292  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:57.157823  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:57.158222  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:57.417475  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:57.602420  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:57.660812  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:57.660979  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:57.916459  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:58.100611  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:58.156866  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:58.157898  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:58.416613  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:58.600402  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:58.655931  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:58.658259  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:58.916548  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:59.100583  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:59.157368  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:59.157786  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:59.417740  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:17:59.600804  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:17:59.658402  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:17:59.658819  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:17:59.916667  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:00.128940  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:00.169770  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:00.170287  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:00.417415  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:00.600981  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:00.654922  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:00.657547  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:00.916411  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:01.100759  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:01.157275  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:01.157730  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:01.417278  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:01.600306  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:01.656870  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:01.657069  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:01.916017  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:02.100625  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:02.156498  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:02.157792  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:02.417005  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:02.600282  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:02.656670  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:02.656868  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:02.916744  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:03.100521  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:03.156601  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:03.157752  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:03.417433  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:03.600745  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:03.657337  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:03.657520  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:03.916633  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:04.099965  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:04.156728  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:04.156848  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:04.416591  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:04.463721  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:18:04.600614  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:04.656282  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:04.656824  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:04.915976  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:05.101430  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:05.157638  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:05.157743  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:05.420072  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:05.600560  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:05.601403  836859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.137640602s)
	W1018 12:18:05.601436  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:18:05.601454  836859 retry.go:31] will retry after 33.384168177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
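The repeated failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the file does not begin with the mandatory top-level apiVersion and kind fields; the other inspektor-gadget objects in the same apply go through unchanged. A minimal diagnostic sketch, assuming SSH access to the addons-206214 node and reusing only the paths shown in the log (the CRD header in the comment is the generic requirement, not the file's actual content):

    # Inspect the first lines of the rejected manifest on the node
    minikube -p addons-206214 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
    # Every manifest must begin with top-level identifiers, e.g. for a CRD:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition
    # Re-run the same apply without mutating the cluster to confirm a fix
    minikube -p addons-206214 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
      -f /etc/kubernetes/addons/ig-crd.yaml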
	I1018 12:18:05.657350  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:05.657653  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:05.917128  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:06.100482  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:06.156938  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:06.157086  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:06.416599  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:06.600243  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:06.657657  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:06.657779  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:06.916301  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:07.101431  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:07.156926  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:07.157536  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:07.417351  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:07.601508  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:07.656764  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:07.657006  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:07.916001  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:08.100356  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:08.156572  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:08.157238  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:08.416654  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:08.600398  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:08.656438  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:08.656759  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:08.916705  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:09.105202  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:09.163711  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:09.163869  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:09.416919  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:09.601190  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:09.658221  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:09.658595  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:09.917179  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:10.100860  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:10.162037  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:10.162788  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:10.416360  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:10.600249  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:10.655268  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:10.656670  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:10.915900  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:11.099880  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:11.156225  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:11.157059  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:11.416431  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:11.601265  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:11.656716  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:11.657134  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:11.916422  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:12.100556  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:12.157179  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:12.157585  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:12.416178  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:12.600525  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:12.656613  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:12.657065  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:12.916570  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:13.100485  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:13.156180  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:13.156489  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:13.417275  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:13.601537  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:13.655906  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:13.656014  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:13.915955  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:14.099924  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:14.156938  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:14.157292  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:18:14.416551  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:14.600693  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:14.656636  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:14.657570  836859 kapi.go:107] duration metric: took 1m26.005719538s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 12:18:14.916759  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:15.100305  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:15.158985  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:15.416733  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:15.601172  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:15.656494  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:15.917662  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:16.101040  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:16.156388  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:16.416641  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:16.600127  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:16.657008  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:16.916650  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:17.100765  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:17.156078  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:17.416574  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:17.601098  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:17.656251  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:17.917008  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:18.101147  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:18.156204  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:18.416838  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:18.601026  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:18.659606  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:18.916507  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:19.101756  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:19.157316  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:19.417724  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:19.601133  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:19.657850  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:19.917995  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:20.103374  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:20.156927  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:20.418733  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:20.600766  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:20.657394  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:20.930554  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:21.101605  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:21.157475  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:21.420884  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:21.600876  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:21.656549  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:21.918453  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:22.104009  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:22.157276  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:22.416690  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:22.602841  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:22.656074  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:22.916819  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:23.099870  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:23.155867  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:23.416625  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:23.600533  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:23.656454  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:23.917015  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:24.100555  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:24.158232  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:24.416440  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:24.601475  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:24.656912  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:24.916140  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:25.101234  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:25.156657  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:25.420006  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:25.600791  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:25.660299  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:25.916696  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:26.100219  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:26.156413  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:26.416604  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:26.601220  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:26.657280  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:26.916889  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:27.101279  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:27.156731  836859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:18:27.416264  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:27.607728  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:27.705438  836859 kapi.go:107] duration metric: took 1m39.052597178s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 12:18:27.916901  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:28.100487  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:28.416495  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:28.600754  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:28.915880  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:29.106130  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:29.416528  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:29.601354  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:29.916725  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:30.106247  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:30.416523  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:30.600874  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:30.917948  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:31.100835  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:31.416179  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:31.603432  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:31.917516  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:32.101375  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:32.416665  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:32.599718  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:32.916004  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:33.109123  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:33.417067  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:18:33.601079  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:33.916716  836859 kapi.go:107] duration metric: took 1m41.503917442s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 12:18:33.919683  836859 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-206214 cluster.
	I1018 12:18:33.922571  836859 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 12:18:33.924992  836859 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
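The three gcp-auth notes above describe an opt-out: a pod that carries a label with the gcp-auth-skip-secret key does not get the credentials mounted. A hedged sketch of creating such a pod (pod name, image, and label value are illustrative; per the message, the presence of the key is what matters):

    kubectl run skip-gcp-auth-demo --image=busybox:stable --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 300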
	I1018 12:18:34.100554  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:34.603985  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:35.100727  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:35.600996  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:36.101004  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:36.601123  836859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:18:37.100154  836859 kapi.go:107] duration metric: took 1m48.003580896s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
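The kapi.go polling above lists pods by label selector every few hundred milliseconds until each addon's pod leaves Pending, then records the total wait as a duration metric. A roughly equivalent manual check (a sketch, not minikube's own code path; namespaces taken from the container status section further down):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=120s
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=120s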
	I1018 12:18:38.985886  836859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:18:39.874975  836859 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:18:39.875076  836859 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 12:18:39.878064  836859 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1018 12:18:39.880950  836859 addons.go:514] duration metric: took 1m58.249330303s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1018 12:18:39.881017  836859 start.go:246] waiting for cluster config update ...
	I1018 12:18:39.881040  836859 start.go:255] writing updated cluster config ...
	I1018 12:18:39.881372  836859 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:39.885425  836859 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:39.889573  836859 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnvks" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.894550  836859 pod_ready.go:94] pod "coredns-66bc5c9577-nnvks" is "Ready"
	I1018 12:18:39.894582  836859 pod_ready.go:86] duration metric: took 4.979486ms for pod "coredns-66bc5c9577-nnvks" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.897162  836859 pod_ready.go:83] waiting for pod "etcd-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.902321  836859 pod_ready.go:94] pod "etcd-addons-206214" is "Ready"
	I1018 12:18:39.902347  836859 pod_ready.go:86] duration metric: took 5.15581ms for pod "etcd-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.904846  836859 pod_ready.go:83] waiting for pod "kube-apiserver-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.910089  836859 pod_ready.go:94] pod "kube-apiserver-addons-206214" is "Ready"
	I1018 12:18:39.910118  836859 pod_ready.go:86] duration metric: took 5.243163ms for pod "kube-apiserver-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:39.912630  836859 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:40.290098  836859 pod_ready.go:94] pod "kube-controller-manager-addons-206214" is "Ready"
	I1018 12:18:40.290131  836859 pod_ready.go:86] duration metric: took 377.472411ms for pod "kube-controller-manager-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:40.489057  836859 pod_ready.go:83] waiting for pod "kube-proxy-hlgtx" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:40.889332  836859 pod_ready.go:94] pod "kube-proxy-hlgtx" is "Ready"
	I1018 12:18:40.889363  836859 pod_ready.go:86] duration metric: took 400.277147ms for pod "kube-proxy-hlgtx" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:41.089627  836859 pod_ready.go:83] waiting for pod "kube-scheduler-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:41.491174  836859 pod_ready.go:94] pod "kube-scheduler-addons-206214" is "Ready"
	I1018 12:18:41.491208  836859 pod_ready.go:86] duration metric: took 401.551929ms for pod "kube-scheduler-addons-206214" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:41.491232  836859 pod_ready.go:40] duration metric: took 1.60577079s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:41.554727  836859 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:18:41.558191  836859 out.go:179] * Done! kubectl is now configured to use "addons-206214" cluster and "default" namespace by default
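The start log closes by noting kubectl 1.33.2 against a 1.34.1 cluster, a minor skew of one, which is within kubectl's supported skew of one minor version either side of the API server. To confirm both versions on the test host (a sketch):

    kubectl version --output=json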
	
	
	==> CRI-O <==
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.435429394Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621 for CNI network kindnet (type=ptp)"
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.4391358Z" level=info msg="Ran pod sandbox 92b66e611bc1f20e7f106f3e0f96a652ca6d6dee75af399ca6ada3a6b16b5469 with infra container: local-path-storage/helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621/POD" id=0116d7c1-7879-4db6-a727-725e60fb713a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.444611325Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=ebff1d0c-c073-49b5-914b-2aa4bae93ca0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.448818087Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=1d3e5cb6-3070-4e6b-abc0-0daef9e167a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.458307895Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621/helper-pod" id=18e8c0dd-25ff-4fa2-97da-a5bce037f588 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.459630484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.467400913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.468366146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.49307533Z" level=info msg="Created container 173e05175f6823d3ea133b2b97fb6df24898daf5ae1ae628d2b9792236f89bd8: local-path-storage/helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621/helper-pod" id=18e8c0dd-25ff-4fa2-97da-a5bce037f588 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.496474489Z" level=info msg="Starting container: 173e05175f6823d3ea133b2b97fb6df24898daf5ae1ae628d2b9792236f89bd8" id=dd20f482-faf6-4033-87dc-593b392fce0f name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:20 addons-206214 crio[832]: time="2025-10-18T12:19:20.509149193Z" level=info msg="Started container" PID=5583 containerID=173e05175f6823d3ea133b2b97fb6df24898daf5ae1ae628d2b9792236f89bd8 description=local-path-storage/helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621/helper-pod id=dd20f482-faf6-4033-87dc-593b392fce0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=92b66e611bc1f20e7f106f3e0f96a652ca6d6dee75af399ca6ada3a6b16b5469
	Oct 18 12:19:21 addons-206214 crio[832]: time="2025-10-18T12:19:21.867681832Z" level=info msg="Stopping pod sandbox: 92b66e611bc1f20e7f106f3e0f96a652ca6d6dee75af399ca6ada3a6b16b5469" id=8e7de5cb-d0fd-4b9e-ba2d-bd37951a3f46 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 12:19:21 addons-206214 crio[832]: time="2025-10-18T12:19:21.867938Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621 Namespace:local-path-storage ID:92b66e611bc1f20e7f106f3e0f96a652ca6d6dee75af399ca6ada3a6b16b5469 UID:c46bbd1a-f3fd-401e-825c-078350e05aa8 NetNS:/var/run/netns/535d6682-b79e-4f81-b1fe-fbb1b9f98dd4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012637e8}] Aliases:map[]}"
	Oct 18 12:19:21 addons-206214 crio[832]: time="2025-10-18T12:19:21.868073723Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621 from CNI network \"kindnet\" (type=ptp)"
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.396608416Z" level=info msg="Stopped pod sandbox: 92b66e611bc1f20e7f106f3e0f96a652ca6d6dee75af399ca6ada3a6b16b5469" id=8e7de5cb-d0fd-4b9e-ba2d-bd37951a3f46 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.733164421Z" level=info msg="Pulled image: docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a" id=2b6946da-ed5b-40a1-989a-e8fac3ce5976 name=/runtime.v1.ImageService/PullImage
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.735166036Z" level=info msg="Checking image status: docker.io/nginx:latest" id=cc42d09c-6eef-4ae2-beb6-4f7b6a254168 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.737621623Z" level=info msg="Checking image status: docker.io/nginx" id=ca323dfa-ae32-49f4-9ce3-beb2a90f5671 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.744323112Z" level=info msg="Creating container: default/task-pv-pod/task-pv-container" id=2ad78ca1-234b-4a0a-8664-87e04ec7377c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.745847157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.754084415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.75466744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.771003928Z" level=info msg="Created container ddaa7a09ca2825cb39206aff0ec071cdd296ddffeacf152bb7a160eab4e78417: default/task-pv-pod/task-pv-container" id=2ad78ca1-234b-4a0a-8664-87e04ec7377c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.776675804Z" level=info msg="Starting container: ddaa7a09ca2825cb39206aff0ec071cdd296ddffeacf152bb7a160eab4e78417" id=9f8d9e27-b921-49e8-bbb4-295ae5de6fb5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:19:23 addons-206214 crio[832]: time="2025-10-18T12:19:23.779250516Z" level=info msg="Started container" PID=5697 containerID=ddaa7a09ca2825cb39206aff0ec071cdd296ddffeacf152bb7a160eab4e78417 description=default/task-pv-pod/task-pv-container id=9f8d9e27-b921-49e8-bbb4-295ae5de6fb5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d52540404329a5bb0911e647596c3f1feffe471fd6c07e40fffc56ce4d13825d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	ddaa7a09ca282       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a                                              4 seconds ago        Running             task-pv-container                        0                   d52540404329a       task-pv-pod                                                  default
	173e05175f682       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             7 seconds ago        Exited              helper-pod                               0                   92b66e611bc1f       helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621   local-path-storage
	6268b344d5c86       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            11 seconds ago       Exited              busybox                                  0                   8af948ecefa78       test-local-path                                              default
	0a6f4636527ad       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            15 seconds ago       Exited              helper-pod                               0                   163e0aed83e21       helper-pod-create-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621   local-path-storage
	326d2d6b33650       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          24 seconds ago       Exited              registry-test                            0                   abab2b3c6ff94       registry-test                                                default
	faa7970b253d6       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          43 seconds ago       Running             busybox                                  0                   6ce195f4267a8       busybox                                                      default
	5b76cd93740ab       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          52 seconds ago       Running             csi-snapshotter                          0                   cf8219069e293       csi-hostpathplugin-sx7b6                                     kube-system
	98f3833f9be11       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          53 seconds ago       Running             csi-provisioner                          0                   cf8219069e293       csi-hostpathplugin-sx7b6                                     kube-system
	c3e4ce21efe38       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 55 seconds ago       Running             gcp-auth                                 0                   83f312298a7ea       gcp-auth-78565c9fb4-rc4zx                                    gcp-auth
	cdf6dc6b59791       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             57 seconds ago       Exited              patch                                    2                   3283c1b84ae05       gcp-auth-certs-patch-78w8b                                   gcp-auth
	45adaaa4d7905       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            58 seconds ago       Running             liveness-probe                           0                   cf8219069e293       csi-hostpathplugin-sx7b6                                     kube-system
	f5ac90f527a67       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           59 seconds ago       Running             hostpath                                 0                   cf8219069e293       csi-hostpathplugin-sx7b6                                     kube-system
	5dc40e4564be4       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   cf8219069e293       csi-hostpathplugin-sx7b6                                     kube-system
	a121a292ddfcf       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             About a minute ago   Running             controller                               0                   fccc6252764c4       ingress-nginx-controller-675c5ddd98-jkzpm                    ingress-nginx
	29a93fedb418d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            About a minute ago   Running             gadget                                   0                   d9a70a7a0cd7a       gadget-798dm                                                 gadget
	e96b31016855e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   69d520820ab62       gcp-auth-certs-create-clnhn                                  gcp-auth
	a32692f08d633       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   cf8219069e293       csi-hostpathplugin-sx7b6                                     kube-system
	16bf9cff88592       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   26266d929a6f5       registry-proxy-cxqbx                                         kube-system
	296399ec57fb6       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   e18fa2b26589e       nvidia-device-plugin-daemonset-k8hvk                         kube-system
	6b1084a290aa4       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   5ba43cb93128d       local-path-provisioner-648f6765c9-n22lq                      local-path-storage
	f4e6a924c7832       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   a2bf968d722ca       yakd-dashboard-5ff678cb9-8zhf4                               yakd-dashboard
	6ce61cd446801       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   dddad451e3153       csi-hostpath-resizer-0                                       kube-system
	514d718d40ef1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   0c9c66018b01d       snapshot-controller-7d9fbc56b8-sc8l2                         kube-system
	a52b4e8e9dff8       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              patch                                    1                   be10c9c70c060       ingress-nginx-admission-patch-qtz2v                          ingress-nginx
	d130ef4648a79       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   c314f96a1f14b       ingress-nginx-admission-create-v7rd7                         ingress-nginx
	afeb96d141fb8       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   0669a4b5f4464       cloud-spanner-emulator-86bd5cbb97-xt4gl                      default
	1f1880b904fc1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   93010ddc3d3d7       snapshot-controller-7d9fbc56b8-fp5gt                         kube-system
	119f93a0bf370       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   c97eafb09d813       registry-6b586f9694-mvmwh                                    kube-system
	640a2e84493b8       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   2a92c4c6f5730       kube-ingress-dns-minikube                                    kube-system
	b417690dc2872       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   5106b0becca76       csi-hostpath-attacher-0                                      kube-system
	bc47b235de19a       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        2 minutes ago        Running             metrics-server                           0                   96e3fe32f5f56       metrics-server-85b7d694d7-lxg99                              kube-system
	0647083a60005       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             2 minutes ago        Running             storage-provisioner                      0                   74a902c5b35bb       storage-provisioner                                          kube-system
	7f3683b181a0b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             2 minutes ago        Running             coredns                                  0                   9df4efdd993cb       coredns-66bc5c9577-nnvks                                     kube-system
	0cb48535119c4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   d8413adfca4ec       kindnet-l2ffr                                                kube-system
	58409db23c34e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   bd119c0250b10       kube-proxy-hlgtx                                             kube-system
	6db03b7b7dbcb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   2a4c8bd604166       kube-scheduler-addons-206214                                 kube-system
	4db50608b742d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   cc962ef98b3ed       kube-controller-manager-addons-206214                        kube-system
	cf0330eac63a5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   6cf2de3a09eb9       kube-apiserver-addons-206214                                 kube-system
	e5013ec0caf4e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   89885d36f8dd4       etcd-addons-206214                                           kube-system
	
	
	==> coredns [7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15] <==
	[INFO] 10.244.0.12:53934 - 42406 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00203406s
	[INFO] 10.244.0.12:53934 - 11560 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000110197s
	[INFO] 10.244.0.12:53934 - 34049 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000276747s
	[INFO] 10.244.0.12:36728 - 8872 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000173828s
	[INFO] 10.244.0.12:36728 - 8683 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071542s
	[INFO] 10.244.0.12:43030 - 46603 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081683s
	[INFO] 10.244.0.12:43030 - 46398 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000209448s
	[INFO] 10.244.0.12:46958 - 36915 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122677s
	[INFO] 10.244.0.12:46958 - 36727 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101269s
	[INFO] 10.244.0.12:37987 - 36155 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001547677s
	[INFO] 10.244.0.12:37987 - 36352 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001652615s
	[INFO] 10.244.0.12:53807 - 45531 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114891s
	[INFO] 10.244.0.12:53807 - 45152 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085974s
	[INFO] 10.244.0.21:48355 - 12811 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000141738s
	[INFO] 10.244.0.21:48320 - 54814 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002510786s
	[INFO] 10.244.0.21:53079 - 61674 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164893s
	[INFO] 10.244.0.21:53967 - 7932 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094418s
	[INFO] 10.244.0.21:58233 - 5746 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131957s
	[INFO] 10.244.0.21:34356 - 53882 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009624s
	[INFO] 10.244.0.21:43007 - 27275 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002353179s
	[INFO] 10.244.0.21:53353 - 18185 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00556325s
	[INFO] 10.244.0.21:49126 - 11639 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002921393s
	[INFO] 10.244.0.21:35936 - 14901 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003103755s
	[INFO] 10.244.0.23:38492 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00021565s
	[INFO] 10.244.0.23:58537 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145833s
	
	
	==> describe nodes <==
	Name:               addons-206214
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-206214
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-206214
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-206214
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-206214"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-206214
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:19:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:19:09 +0000   Sat, 18 Oct 2025 12:16:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:19:09 +0000   Sat, 18 Oct 2025 12:16:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:19:09 +0000   Sat, 18 Oct 2025 12:16:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:19:09 +0000   Sat, 18 Oct 2025 12:17:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-206214
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                48fd73e9-b11f-46d2-a783-76daabc219c5
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     cloud-spanner-emulator-86bd5cbb97-xt4gl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  gadget                      gadget-798dm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  gcp-auth                    gcp-auth-78565c9fb4-rc4zx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jkzpm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m40s
	  kube-system                 coredns-66bc5c9577-nnvks                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m46s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 csi-hostpathplugin-sx7b6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 etcd-addons-206214                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m52s
	  kube-system                 kindnet-l2ffr                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m46s
	  kube-system                 kube-apiserver-addons-206214                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-controller-manager-addons-206214        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-proxy-hlgtx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-scheduler-addons-206214                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 metrics-server-85b7d694d7-lxg99              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m42s
	  kube-system                 nvidia-device-plugin-daemonset-k8hvk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 registry-6b586f9694-mvmwh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 registry-creds-764b6fb674-46n6w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 registry-proxy-cxqbx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-fp5gt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 snapshot-controller-7d9fbc56b8-sc8l2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  local-path-storage          local-path-provisioner-648f6765c9-n22lq      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8zhf4               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age              From             Message
	  ----     ------                   ----             ----             -------
	  Normal   Starting                 2m44s            kube-proxy       
	  Normal   NodeHasSufficientMemory  3m (x8 over 3m)  kubelet          Node addons-206214 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m (x8 over 3m)  kubelet          Node addons-206214 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m (x8 over 3m)  kubelet          Node addons-206214 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m52s            kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m52s            kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m51s            kubelet          Node addons-206214 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m51s            kubelet          Node addons-206214 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m51s            kubelet          Node addons-206214 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m47s            node-controller  Node addons-206214 event: Registered Node addons-206214 in Controller
	  Normal   NodeReady                2m5s             kubelet          Node addons-206214 status is now: NodeReady
	
	
	==> dmesg <==
	[ +18.372160] overlayfs: idmapped layers are currently not supported
	[Oct18 10:49] overlayfs: idmapped layers are currently not supported
	[Oct18 10:50] overlayfs: idmapped layers are currently not supported
	[Oct18 10:51] overlayfs: idmapped layers are currently not supported
	[ +26.703285] overlayfs: idmapped layers are currently not supported
	[Oct18 10:52] overlayfs: idmapped layers are currently not supported
	[Oct18 10:53] overlayfs: idmapped layers are currently not supported
	[Oct18 10:54] overlayfs: idmapped layers are currently not supported
	[ +42.459395] overlayfs: idmapped layers are currently not supported
	[  +0.085900] overlayfs: idmapped layers are currently not supported
	[Oct18 10:56] overlayfs: idmapped layers are currently not supported
	[ +18.116656] overlayfs: idmapped layers are currently not supported
	[Oct18 10:58] overlayfs: idmapped layers are currently not supported
	[  +3.156194] overlayfs: idmapped layers are currently not supported
	[Oct18 11:00] overlayfs: idmapped layers are currently not supported
	[Oct18 11:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 11:22] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=00000037 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000d8d7ca74{9P.session} n=00000000f5b34d7b
	[  +0.001120] FS-Cache: O-key=[10] '34323937363632323639'
	[  +0.000787] FS-Cache: N-cookie c=00000038 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=00000000204faf8b
	[  +0.001107] FS-Cache: N-key=[10] '34323937363632323639'
	[Oct18 12:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 12:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608] <==
	{"level":"warn","ts":"2025-10-18T12:16:31.536437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.552135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.565873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.600274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.623936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.650859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.713752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.731238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.744160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.810729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.816466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.848274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.867516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.901222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.937712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:31.990322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:32.024030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:32.048499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:32.257735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:49.419791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:49.435950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:11.397384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:11.419959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:11.473001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:17:11.491637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51136","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c3e4ce21efe3844f94e7dab5975609f7ccbfa7d1d7a51738baf08d96acf21a3d] <==
	2025/10/18 12:18:33 GCP Auth Webhook started!
	2025/10/18 12:18:42 Ready to marshal response ...
	2025/10/18 12:18:42 Ready to write response ...
	2025/10/18 12:18:42 Ready to marshal response ...
	2025/10/18 12:18:42 Ready to write response ...
	2025/10/18 12:18:42 Ready to marshal response ...
	2025/10/18 12:18:42 Ready to write response ...
	2025/10/18 12:19:01 Ready to marshal response ...
	2025/10/18 12:19:01 Ready to write response ...
	2025/10/18 12:19:11 Ready to marshal response ...
	2025/10/18 12:19:11 Ready to write response ...
	2025/10/18 12:19:12 Ready to marshal response ...
	2025/10/18 12:19:12 Ready to write response ...
	2025/10/18 12:19:17 Ready to marshal response ...
	2025/10/18 12:19:17 Ready to write response ...
	2025/10/18 12:19:20 Ready to marshal response ...
	2025/10/18 12:19:20 Ready to write response ...
	
	
	==> kernel <==
	 12:19:28 up  4:02,  0 user,  load average: 2.31, 3.14, 3.42
	Linux addons-206214 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416] <==
	I1018 12:17:23.423043       1 main.go:301] handling current node
	I1018 12:17:33.423724       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:33.423787       1 main.go:301] handling current node
	I1018 12:17:43.423025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:43.423054       1 main.go:301] handling current node
	I1018 12:17:53.422037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:53.422122       1 main.go:301] handling current node
	I1018 12:18:03.427753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:18:03.427782       1 main.go:301] handling current node
	I1018 12:18:13.423778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:18:13.423866       1 main.go:301] handling current node
	I1018 12:18:23.422967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:18:23.423006       1 main.go:301] handling current node
	I1018 12:18:33.422394       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:18:33.422442       1 main.go:301] handling current node
	I1018 12:18:43.422008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:18:43.422048       1 main.go:301] handling current node
	I1018 12:18:53.423033       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:18:53.423085       1 main.go:301] handling current node
	I1018 12:19:03.423008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:19:03.423040       1 main.go:301] handling current node
	I1018 12:19:13.422995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:19:13.423029       1 main.go:301] handling current node
	I1018 12:19:23.423722       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:19:23.423757       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2] <==
	I1018 12:16:49.028361       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.48.140"}
	W1018 12:16:49.418434       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 12:16:49.434244       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 12:16:52.261507       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.100.203.81"}
	W1018 12:17:11.397173       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 12:17:11.416957       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 12:17:11.462094       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 12:17:11.486802       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 12:17:23.838420       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.203.81:443: connect: connection refused
	E1018 12:17:23.838505       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.203.81:443: connect: connection refused" logger="UnhandledError"
	W1018 12:17:23.842269       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.203.81:443: connect: connection refused
	E1018 12:17:23.842312       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.203.81:443: connect: connection refused" logger="UnhandledError"
	W1018 12:17:23.941066       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.203.81:443: connect: connection refused
	E1018 12:17:23.941114       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.203.81:443: connect: connection refused" logger="UnhandledError"
	W1018 12:17:39.274153       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 12:17:39.274222       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 12:17:39.275478       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.21.93:443: connect: connection refused" logger="UnhandledError"
	E1018 12:17:39.276379       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.21.93:443: connect: connection refused" logger="UnhandledError"
	E1018 12:17:39.281482       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.21.93:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.21.93:443: connect: connection refused" logger="UnhandledError"
	I1018 12:17:39.445465       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 12:18:50.635868       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43876: use of closed network connection
	E1018 12:18:50.857466       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43902: use of closed network connection
	
	
	==> kube-controller-manager [4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b] <==
	I1018 12:16:41.405782       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 12:16:41.410996       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:16:41.418201       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:16:41.418229       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:16:41.418237       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:16:41.426985       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:16:41.427483       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:16:41.427716       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:16:41.430871       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 12:16:41.434368       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 12:16:41.434440       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 12:16:41.434471       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:16:41.434476       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:16:41.434485       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:16:41.438586       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:16:41.443974       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-206214" podCIDRs=["10.244.0.0/24"]
	E1018 12:16:46.562503       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 12:17:11.389517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 12:17:11.389664       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 12:17:11.389714       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 12:17:11.429761       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 12:17:11.436887       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 12:17:11.490187       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:17:11.537951       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:17:26.355100       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f] <==
	I1018 12:16:43.257350       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:16:43.361288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:16:43.462264       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:16:43.462302       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:16:43.462379       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:16:43.506945       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:16:43.507007       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:16:43.608898       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:16:43.609296       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:16:43.609320       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:16:43.615194       1 config.go:200] "Starting service config controller"
	I1018 12:16:43.615215       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:16:43.615232       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:16:43.615237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:16:43.615248       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:16:43.615252       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:16:43.617307       1 config.go:309] "Starting node config controller"
	I1018 12:16:43.617321       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:16:43.617328       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:16:43.719062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:16:43.719103       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:16:43.719151       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001] <==
	I1018 12:16:35.017517       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:16:35.022129       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:16:35.022241       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 12:16:35.025861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 12:16:35.026349       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:16:35.026760       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:16:35.033338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:16:35.038911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:35.039132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:16:35.040768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:35.043248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:16:35.043504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:16:35.043587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:16:35.044270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:16:35.044374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:16:35.044455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:16:35.044547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:16:35.044677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:16:35.044793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:16:35.044868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:16:35.045031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:16:35.047105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:16:35.047281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:35.047421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1018 12:16:36.023192       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:19:19 addons-206214 kubelet[1279]: I1018 12:19:19.856955    1279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af948ecefa78308640e5160a4dea6c5a79f87e73f85feceaac7027c38ca2ffc"
	Oct 18 12:19:19 addons-206214 kubelet[1279]: E1018 12:19:19.859022    1279 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-206214\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-206214' and this object" podUID="438e4167-a2a9-4688-aa7c-82a429bff458" pod="default/test-local-path"
	Oct 18 12:19:20 addons-206214 kubelet[1279]: E1018 12:19:20.048387    1279 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-206214\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-206214' and this object" podUID="438e4167-a2a9-4688-aa7c-82a429bff458" pod="default/test-local-path"
	Oct 18 12:19:20 addons-206214 kubelet[1279]: I1018 12:19:20.216664    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c46bbd1a-f3fd-401e-825c-078350e05aa8-script\") pod \"helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621\" (UID: \"c46bbd1a-f3fd-401e-825c-078350e05aa8\") " pod="local-path-storage/helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621"
	Oct 18 12:19:20 addons-206214 kubelet[1279]: I1018 12:19:20.216774    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c46bbd1a-f3fd-401e-825c-078350e05aa8-data\") pod \"helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621\" (UID: \"c46bbd1a-f3fd-401e-825c-078350e05aa8\") " pod="local-path-storage/helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621"
	Oct 18 12:19:20 addons-206214 kubelet[1279]: I1018 12:19:20.216837    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rhw\" (UniqueName: \"kubernetes.io/projected/c46bbd1a-f3fd-401e-825c-078350e05aa8-kube-api-access-t6rhw\") pod \"helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621\" (UID: \"c46bbd1a-f3fd-401e-825c-078350e05aa8\") " pod="local-path-storage/helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621"
	Oct 18 12:19:20 addons-206214 kubelet[1279]: I1018 12:19:20.216885    1279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c46bbd1a-f3fd-401e-825c-078350e05aa8-gcp-creds\") pod \"helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621\" (UID: \"c46bbd1a-f3fd-401e-825c-078350e05aa8\") " pod="local-path-storage/helper-pod-delete-pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621"
	Oct 18 12:19:20 addons-206214 kubelet[1279]: I1018 12:19:20.874563    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438e4167-a2a9-4688-aa7c-82a429bff458" path="/var/lib/kubelet/pods/438e4167-a2a9-4688-aa7c-82a429bff458/volumes"
	Oct 18 12:19:20 addons-206214 kubelet[1279]: E1018 12:19:20.884647    1279 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-206214\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-206214' and this object" podUID="438e4167-a2a9-4688-aa7c-82a429bff458" pod="default/test-local-path"
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.544646    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c46bbd1a-f3fd-401e-825c-078350e05aa8-gcp-creds\") pod \"c46bbd1a-f3fd-401e-825c-078350e05aa8\" (UID: \"c46bbd1a-f3fd-401e-825c-078350e05aa8\") "
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.544763    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c46bbd1a-f3fd-401e-825c-078350e05aa8-script\") pod \"c46bbd1a-f3fd-401e-825c-078350e05aa8\" (UID: \"c46bbd1a-f3fd-401e-825c-078350e05aa8\") "
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.544792    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6rhw\" (UniqueName: \"kubernetes.io/projected/c46bbd1a-f3fd-401e-825c-078350e05aa8-kube-api-access-t6rhw\") pod \"c46bbd1a-f3fd-401e-825c-078350e05aa8\" (UID: \"c46bbd1a-f3fd-401e-825c-078350e05aa8\") "
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.544818    1279 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c46bbd1a-f3fd-401e-825c-078350e05aa8-data\") pod \"c46bbd1a-f3fd-401e-825c-078350e05aa8\" (UID: \"c46bbd1a-f3fd-401e-825c-078350e05aa8\") "
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.544977    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46bbd1a-f3fd-401e-825c-078350e05aa8-data" (OuterVolumeSpecName: "data") pod "c46bbd1a-f3fd-401e-825c-078350e05aa8" (UID: "c46bbd1a-f3fd-401e-825c-078350e05aa8"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.545007    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46bbd1a-f3fd-401e-825c-078350e05aa8-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "c46bbd1a-f3fd-401e-825c-078350e05aa8" (UID: "c46bbd1a-f3fd-401e-825c-078350e05aa8"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.545302    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46bbd1a-f3fd-401e-825c-078350e05aa8-script" (OuterVolumeSpecName: "script") pod "c46bbd1a-f3fd-401e-825c-078350e05aa8" (UID: "c46bbd1a-f3fd-401e-825c-078350e05aa8"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.547699    1279 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46bbd1a-f3fd-401e-825c-078350e05aa8-kube-api-access-t6rhw" (OuterVolumeSpecName: "kube-api-access-t6rhw") pod "c46bbd1a-f3fd-401e-825c-078350e05aa8" (UID: "c46bbd1a-f3fd-401e-825c-078350e05aa8"). InnerVolumeSpecName "kube-api-access-t6rhw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.646227    1279 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c46bbd1a-f3fd-401e-825c-078350e05aa8-gcp-creds\") on node \"addons-206214\" DevicePath \"\""
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.646274    1279 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c46bbd1a-f3fd-401e-825c-078350e05aa8-script\") on node \"addons-206214\" DevicePath \"\""
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.646287    1279 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t6rhw\" (UniqueName: \"kubernetes.io/projected/c46bbd1a-f3fd-401e-825c-078350e05aa8-kube-api-access-t6rhw\") on node \"addons-206214\" DevicePath \"\""
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.646300    1279 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c46bbd1a-f3fd-401e-825c-078350e05aa8-data\") on node \"addons-206214\" DevicePath \"\""
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.876493    1279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92b66e611bc1f20e7f106f3e0f96a652ca6d6dee75af399ca6ada3a6b16b5469"
	Oct 18 12:19:23 addons-206214 kubelet[1279]: I1018 12:19:23.893233    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=1.815474773 podStartE2EDuration="6.893210235s" podCreationTimestamp="2025-10-18 12:19:17 +0000 UTC" firstStartedPulling="2025-10-18 12:19:18.65921717 +0000 UTC m=+161.965808792" lastFinishedPulling="2025-10-18 12:19:23.73695264 +0000 UTC m=+167.043544254" observedRunningTime="2025-10-18 12:19:23.891454737 +0000 UTC m=+167.198046359" watchObservedRunningTime="2025-10-18 12:19:23.893210235 +0000 UTC m=+167.199801857"
	Oct 18 12:19:24 addons-206214 kubelet[1279]: I1018 12:19:24.871792    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46bbd1a-f3fd-401e-825c-078350e05aa8" path="/var/lib/kubelet/pods/c46bbd1a-f3fd-401e-825c-078350e05aa8/volumes"
	Oct 18 12:19:26 addons-206214 kubelet[1279]: E1018 12:19:26.819995    1279 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-46n6w" podUID="42f8d1bb-d8fb-46f3-b38b-4b30a61b5fa3"
	
	
	==> storage-provisioner [0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39] <==
	W1018 12:19:03.608626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:05.612166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:05.618879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:07.623452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:07.633562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:09.636413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:09.641358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:11.644364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:11.651402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:13.654261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:13.659910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:15.663942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:15.672514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:17.676185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:17.681211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:19.693106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:19.704325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:21.707073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:21.713209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:23.716613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:23.724344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:25.728077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:25.732919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:27.737051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:19:27.744881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-206214 -n addons-206214
helpers_test.go:269: (dbg) Run:  kubectl --context addons-206214 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-v7rd7 ingress-nginx-admission-patch-qtz2v registry-creds-764b6fb674-46n6w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-206214 describe pod ingress-nginx-admission-create-v7rd7 ingress-nginx-admission-patch-qtz2v registry-creds-764b6fb674-46n6w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-206214 describe pod ingress-nginx-admission-create-v7rd7 ingress-nginx-admission-patch-qtz2v registry-creds-764b6fb674-46n6w: exit status 1 (94.967962ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-v7rd7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qtz2v" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-46n6w" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-206214 describe pod ingress-nginx-admission-create-v7rd7 ingress-nginx-admission-patch-qtz2v registry-creds-764b6fb674-46n6w: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable headlamp --alsologtostderr -v=1: exit status 11 (307.64488ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:29.902269  844543 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:29.903075  844543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:29.903090  844543 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:29.903095  844543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:29.903362  844543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:29.906269  844543 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:29.906650  844543 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:29.906677  844543 addons.go:606] checking whether the cluster is paused
	I1018 12:19:29.906784  844543 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:29.906799  844543 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:29.907279  844543 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:29.924676  844543 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:29.924736  844543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:29.942769  844543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:30.084142  844543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:30.084291  844543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:30.120138  844543 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:30.120190  844543 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:30.120196  844543 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:30.120208  844543 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:30.120211  844543 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:30.120216  844543 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:30.120219  844543 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:30.120223  844543 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:30.120226  844543 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:30.120232  844543 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:30.120236  844543 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:30.120239  844543 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:30.120246  844543 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:30.120250  844543 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:30.120253  844543 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:30.120260  844543 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:30.120264  844543 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:30.120268  844543 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:30.120272  844543 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:30.120275  844543 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:30.120279  844543 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:30.120283  844543 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:30.120286  844543 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:30.120289  844543 cri.go:89] found id: ""
	I1018 12:19:30.120353  844543 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:30.139149  844543 out.go:203] 
	W1018 12:19:30.142617  844543 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:30.142654  844543 out.go:285] * 
	* 
	W1018 12:19:30.149180  844543 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:30.153050  844543 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.49s)
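
Every one of the addons disable failures in this group (Headlamp above, then CloudSpanner, LocalPath, NvidiaDevicePlugin and Yakd below) exits the same way: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node, and that second step fails because /run/runc does not exist on this crio node, so the command aborts with MK_ADDON_DISABLE_PAUSED even though the containers are running. A minimal sketch of reproducing the check by hand is below; it assumes the profile name addons-206214 from these logs and only inspects state (the /run/crun path in the last line is an assumption about the image's default OCI runtime, not something taken from this report).

	# reproduce minikube's paused-state check on the node (docker driver, crio runtime)
	out/minikube-linux-arm64 -p addons-206214 ssh
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # lists the same container IDs as above
	sudo runc list -f json        # fails: open /run/runc: no such file or directory
	ls -d /run/runc /run/crun     # shows which OCI runtime state directory actually exists
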

                                                
                                    
TestAddons/parallel/CloudSpanner (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-xt4gl" [ec19586f-048e-4c0c-8c44-ee0e503529f4] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003752131s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (272.199073ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:26.452795  844017 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:26.453855  844017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:26.453871  844017 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:26.453877  844017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:26.454163  844017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:26.454494  844017 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:26.454860  844017 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:26.454884  844017 addons.go:606] checking whether the cluster is paused
	I1018 12:19:26.454987  844017 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:26.455004  844017 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:26.455527  844017 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:26.475876  844017 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:26.475938  844017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:26.496703  844017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:26.602412  844017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:26.602506  844017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:26.637880  844017 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:26.637908  844017 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:26.637914  844017 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:26.637918  844017 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:26.637921  844017 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:26.637928  844017 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:26.637931  844017 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:26.637934  844017 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:26.637938  844017 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:26.637948  844017 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:26.637957  844017 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:26.637960  844017 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:26.637964  844017 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:26.637967  844017 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:26.637970  844017 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:26.637978  844017 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:26.637986  844017 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:26.637990  844017 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:26.637994  844017 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:26.637997  844017 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:26.638000  844017 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:26.638004  844017 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:26.638007  844017 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:26.638010  844017 cri.go:89] found id: ""
	I1018 12:19:26.638068  844017 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:26.653771  844017 out.go:203] 
	W1018 12:19:26.656661  844017 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:26.656692  844017 out.go:285] * 
	* 
	W1018 12:19:26.663233  844017 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:26.666306  844017 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.28s)

                                                
                                    
TestAddons/parallel/LocalPath (8.78s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-206214 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-206214 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-206214 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [438e4167-a2a9-4688-aa7c-82a429bff458] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [438e4167-a2a9-4688-aa7c-82a429bff458] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [438e4167-a2a9-4688-aa7c-82a429bff458] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004313676s
addons_test.go:967: (dbg) Run:  kubectl --context addons-206214 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 ssh "cat /opt/local-path-provisioner/pvc-7c09a3f7-f771-470c-9722-a5d9a87f5621_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-206214 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-206214 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (357.988208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:20.116613  843809 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:20.119951  843809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:20.119973  843809 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:20.119980  843809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:20.120295  843809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:20.120633  843809 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:20.121000  843809 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:20.121030  843809 addons.go:606] checking whether the cluster is paused
	I1018 12:19:20.121137  843809 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:20.121155  843809 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:20.121645  843809 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:20.147451  843809 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:20.147508  843809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:20.175926  843809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:20.292433  843809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:20.292530  843809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:20.344280  843809 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:20.344310  843809 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:20.344317  843809 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:20.344321  843809 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:20.344324  843809 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:20.344328  843809 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:20.344331  843809 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:20.344334  843809 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:20.344338  843809 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:20.344344  843809 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:20.344348  843809 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:20.344351  843809 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:20.344355  843809 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:20.344358  843809 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:20.344362  843809 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:20.344368  843809 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:20.344375  843809 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:20.344378  843809 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:20.344383  843809 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:20.344386  843809 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:20.344391  843809 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:20.344401  843809 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:20.344404  843809 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:20.344411  843809 cri.go:89] found id: ""
	I1018 12:19:20.344478  843809 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:20.370956  843809 out.go:203] 
	W1018 12:19:20.374103  843809 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:20.374132  843809 out.go:285] * 
	* 
	W1018 12:19:20.380630  843809 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:20.383862  843809 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.78s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-k8hvk" [4e54500d-15da-4497-a8dc-cbc3371b487a] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003637923s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (265.701002ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:19:11.397411  843412 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:19:11.399034  843412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:11.399093  843412 out.go:374] Setting ErrFile to fd 2...
	I1018 12:19:11.399115  843412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:19:11.399421  843412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:19:11.399878  843412 mustload.go:65] Loading cluster: addons-206214
	I1018 12:19:11.400358  843412 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:11.400416  843412 addons.go:606] checking whether the cluster is paused
	I1018 12:19:11.400565  843412 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:19:11.400598  843412 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:19:11.401121  843412 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:19:11.419059  843412 ssh_runner.go:195] Run: systemctl --version
	I1018 12:19:11.419120  843412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:19:11.440840  843412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:19:11.546450  843412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:19:11.546545  843412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:19:11.575415  843412 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:19:11.575448  843412 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:19:11.575454  843412 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:19:11.575457  843412 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:19:11.575461  843412 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:19:11.575465  843412 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:19:11.575468  843412 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:19:11.575472  843412 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:19:11.575475  843412 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:19:11.575482  843412 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:19:11.575489  843412 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:19:11.575492  843412 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:19:11.575495  843412 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:19:11.575498  843412 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:19:11.575501  843412 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:19:11.575507  843412 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:19:11.575512  843412 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:19:11.575516  843412 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:19:11.575519  843412 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:19:11.575522  843412 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:19:11.575526  843412 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:19:11.575530  843412 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:19:11.575532  843412 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:19:11.575535  843412 cri.go:89] found id: ""
	I1018 12:19:11.575585  843412 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:19:11.590385  843412 out.go:203] 
	W1018 12:19:11.593416  843412 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:19:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:19:11.593440  843412 out.go:285] * 
	* 
	W1018 12:19:11.599835  843412 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:19:11.602959  843412 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
TestAddons/parallel/Yakd (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8zhf4" [69d46f26-a391-4ad8-b6f8-4b503e599835] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003107975s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-206214 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-206214 addons disable yakd --alsologtostderr -v=1: exit status 11 (284.982913ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:18:57.302098  843098 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:18:57.302993  843098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:57.303046  843098 out.go:374] Setting ErrFile to fd 2...
	I1018 12:18:57.303067  843098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:18:57.303362  843098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:18:57.303802  843098 mustload.go:65] Loading cluster: addons-206214
	I1018 12:18:57.304274  843098 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:57.304326  843098 addons.go:606] checking whether the cluster is paused
	I1018 12:18:57.304537  843098 config.go:182] Loaded profile config "addons-206214": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:18:57.304573  843098 host.go:66] Checking if "addons-206214" exists ...
	I1018 12:18:57.305433  843098 cli_runner.go:164] Run: docker container inspect addons-206214 --format={{.State.Status}}
	I1018 12:18:57.323769  843098 ssh_runner.go:195] Run: systemctl --version
	I1018 12:18:57.323833  843098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-206214
	I1018 12:18:57.341035  843098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/addons-206214/id_rsa Username:docker}
	I1018 12:18:57.461098  843098 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:18:57.461216  843098 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:18:57.497892  843098 cri.go:89] found id: "5b76cd93740ab08e4600a9e6ee5887046afb337ffb58b644f8c463d6a1486346"
	I1018 12:18:57.497917  843098 cri.go:89] found id: "98f3833f9be119b2928bc1e6a45b7b4da3978f97c39252cf875172260d4ccfb0"
	I1018 12:18:57.497923  843098 cri.go:89] found id: "45adaaa4d79057062d07d325e3a1390cec161349e88757d113f0ca77257eb0b1"
	I1018 12:18:57.497926  843098 cri.go:89] found id: "f5ac90f527a670189e8c10a2cb0f1719d2235bab7fd5241396177cc69cd6715e"
	I1018 12:18:57.497932  843098 cri.go:89] found id: "5dc40e4564be466dd57febacf376d48aeaad71eead7aa34a0b1987aecef7180d"
	I1018 12:18:57.497936  843098 cri.go:89] found id: "a32692f08d633b2b3140df97801b26bcbdb99965fe1c96bd121479e8675bc079"
	I1018 12:18:57.497942  843098 cri.go:89] found id: "16bf9cff8859271d648ea0b79d36fc791d20266b71f491a69e527eeed6266191"
	I1018 12:18:57.497945  843098 cri.go:89] found id: "296399ec57fb6ef6deb84dac19e03f93d4328932e0f9491439bf5999176bda30"
	I1018 12:18:57.497949  843098 cri.go:89] found id: "6ce61cd446801a7540934a684ced5b59e62ee8299908d30634b3e5d6f7313de5"
	I1018 12:18:57.497956  843098 cri.go:89] found id: "514d718d40ef1389125cb0edf6bdb1f9a26a8f5ffdd976347d7593b8080ce001"
	I1018 12:18:57.497960  843098 cri.go:89] found id: "1f1880b904fc1e9446946ddc974ec14e95894f085ac0e9434cd9ec0619240926"
	I1018 12:18:57.497964  843098 cri.go:89] found id: "119f93a0bf370d41f5c13af5e1eaa9cb81d94bde3111969f1e184eaf422b3e4b"
	I1018 12:18:57.497967  843098 cri.go:89] found id: "640a2e84493b8baa0b5ea9006ad58b0ec53c957d9b8a59c79fac898bcabd55bc"
	I1018 12:18:57.497972  843098 cri.go:89] found id: "b417690dc2872cafa955441843c805d20b58b255779caffce06829d44267cdec"
	I1018 12:18:57.497976  843098 cri.go:89] found id: "bc47b235de19a173911b7c028e510fd7fd8fb59ee728f2b580d284a3501f93e7"
	I1018 12:18:57.497981  843098 cri.go:89] found id: "0647083a60005b5854ecbe887291822eecf421f94d1ae479ca3e27e6bd054b39"
	I1018 12:18:57.497985  843098 cri.go:89] found id: "7f3683b181a0b5d3ec8c73f584da608d12fd205b2411b00489b33aa9d7e6df15"
	I1018 12:18:57.497989  843098 cri.go:89] found id: "0cb48535119c4081ee5a0cf53d189605976fd57451d8501d6fa6c838d9726416"
	I1018 12:18:57.497992  843098 cri.go:89] found id: "58409db23c34e9c0af8045b7c87a967b0ba9252a2d9875b9dfac4a60965fd46f"
	I1018 12:18:57.497995  843098 cri.go:89] found id: "6db03b7b7dbcbbceb8bba7cacfd41497e4715b7c3b1ebb3a271c632b1ce2e001"
	I1018 12:18:57.498000  843098 cri.go:89] found id: "4db50608b742df8655f8bb3be796d9aeb0cf0c889f4cee52af60ecc809f5787b"
	I1018 12:18:57.498007  843098 cri.go:89] found id: "cf0330eac63a554ff94545c57ff08cda769310f8434691f658a5f022e829eaf2"
	I1018 12:18:57.498010  843098 cri.go:89] found id: "e5013ec0caf4ee4cb22fd8f1a6f80a3bf3f7f8bf2448e34b4b80ed6b1c737608"
	I1018 12:18:57.498014  843098 cri.go:89] found id: ""
	I1018 12:18:57.498067  843098 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 12:18:57.517651  843098 out.go:203] 
	W1018 12:18:57.521417  843098 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:18:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 12:18:57.521446  843098 out.go:285] * 
	* 
	W1018 12:18:57.527877  843098 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 12:18:57.531449  843098 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-206214 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-767781 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-767781 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-rxlvn" [110c95ae-3b62-475e-ba39-4b7a5e62abc0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1018 12:26:26.318650  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:28:42.457317  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:29:10.160719  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:33:42.457759  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767781 -n functional-767781
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 12:36:19.653983734 +0000 UTC m=+1245.290544401
functional_test.go:1645: (dbg) Run:  kubectl --context functional-767781 describe po hello-node-connect-7d85dfc575-rxlvn -n default
functional_test.go:1645: (dbg) kubectl --context functional-767781 describe po hello-node-connect-7d85dfc575-rxlvn -n default:
Name:             hello-node-connect-7d85dfc575-rxlvn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-767781/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:26:19 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqtmr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kqtmr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rxlvn to functional-767781
Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-767781 logs hello-node-connect-7d85dfc575-rxlvn -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-767781 logs hello-node-connect-7d85dfc575-rxlvn -n default: exit status 1 (84.835455ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-rxlvn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-767781 logs hello-node-connect-7d85dfc575-rxlvn -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-767781 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-rxlvn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-767781/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:26:19 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqtmr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kqtmr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rxlvn to functional-767781
Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
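
The pull failure recorded in the events above is a short-name resolution error rather than a registry outage: the deployment was created with the unqualified image kicbase/echo-server, and the node's container image configuration has short-name-mode set to enforcing, so an ambiguous short name is rejected outright. A sketch of the two usual ways around it follows, assuming the image is published on Docker Hub as docker.io/kicbase/echo-server; the registry prefix and the configuration file paths are assumptions for illustration, not taken from this report.

	# option 1: create the deployment with a fully-qualified image reference
	kubectl --context functional-767781 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server:latest

	# option 2: on the node, relax short-name handling or add an alias
	# /etc/containers/registries.conf
	short-name-mode = "permissive"

	# /etc/containers/registries.conf.d/echo-server.conf
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
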

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-767781 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-767781 logs -l app=hello-node-connect: exit status 1 (100.770401ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-rxlvn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-767781 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-767781 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.98.12
IPs:                      10.107.98.12
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32721/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-767781
helpers_test.go:243: (dbg) docker inspect functional-767781:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89",
	        "Created": "2025-10-18T12:23:24.171566585Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 851999,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:23:24.236406297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89/hostname",
	        "HostsPath": "/var/lib/docker/containers/2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89/hosts",
	        "LogPath": "/var/lib/docker/containers/2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89/2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89-json.log",
	        "Name": "/functional-767781",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-767781:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-767781",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89",
	                "LowerDir": "/var/lib/docker/overlay2/fa25d236933bcd9a03d94b40192a99d1c0a52d1094942913b2f2c3355a51b2ba-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa25d236933bcd9a03d94b40192a99d1c0a52d1094942913b2f2c3355a51b2ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa25d236933bcd9a03d94b40192a99d1c0a52d1094942913b2f2c3355a51b2ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa25d236933bcd9a03d94b40192a99d1c0a52d1094942913b2f2c3355a51b2ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-767781",
	                "Source": "/var/lib/docker/volumes/functional-767781/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-767781",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-767781",
	                "name.minikube.sigs.k8s.io": "functional-767781",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c2638c4da406009dc54ae80ed9f64533f172a2a4ada46b6414c5a14a067b7c9",
	            "SandboxKey": "/var/run/docker/netns/5c2638c4da40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33889"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33890"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-767781": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:74:0b:cb:1c:f1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab1e83ba61f5c922ef06db537a3a622b0952df138aaac40773777a9b90d94a01",
	                    "EndpointID": "f7d3c30496d14a42f2d9b5e88ff424ef01a6cda13fd67c6525361ebef5b2a1d0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-767781",
	                        "2dfc350f4de7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
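For reference, the NetworkSettings block above shows the cluster's API server port 8441 published on 127.0.0.1:33890 (matching APIServerPort:8441 in the profile config later in these logs). A quick reachability sketch against that mapping, assuming curl is available on the host:

	curl -k https://127.0.0.1:33890/version   # -k because the apiserver certificate is not trusted by the host

Depending on whether anonymous auth is enabled, this returns either the version JSON or a 401/403 body; either response confirms the published port reaches the apiserver.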
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-767781 -n functional-767781
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-767781 logs -n 25: (1.536533776s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-767781 ssh findmnt -T /mount-9p | grep 9p                                                             │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │ 18 Oct 25 12:35 UTC │
	│ ssh            │ functional-767781 ssh -- ls -la /mount-9p                                                                        │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │ 18 Oct 25 12:35 UTC │
	│ ssh            │ functional-767781 ssh sudo umount -f /mount-9p                                                                   │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │                     │
	│ mount          │ -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount1 --alsologtostderr -v=1 │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │                     │
	│ ssh            │ functional-767781 ssh findmnt -T /mount1                                                                         │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │                     │
	│ mount          │ -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount2 --alsologtostderr -v=1 │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │                     │
	│ mount          │ -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount3 --alsologtostderr -v=1 │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │                     │
	│ ssh            │ functional-767781 ssh findmnt -T /mount1                                                                         │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │ 18 Oct 25 12:35 UTC │
	│ ssh            │ functional-767781 ssh findmnt -T /mount2                                                                         │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │ 18 Oct 25 12:35 UTC │
	│ ssh            │ functional-767781 ssh findmnt -T /mount3                                                                         │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:35 UTC │ 18 Oct 25 12:36 UTC │
	│ mount          │ -p functional-767781 --kill=true                                                                                 │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │                     │
	│ start          │ -p functional-767781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio        │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │                     │
	│ start          │ -p functional-767781 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                  │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │                     │
	│ start          │ -p functional-767781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio        │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-767781 --alsologtostderr -v=1                                                   │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ update-context │ functional-767781 update-context --alsologtostderr -v=2                                                          │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ update-context │ functional-767781 update-context --alsologtostderr -v=2                                                          │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ update-context │ functional-767781 update-context --alsologtostderr -v=2                                                          │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ image          │ functional-767781 image ls --format short --alsologtostderr                                                      │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ image          │ functional-767781 image ls --format json --alsologtostderr                                                       │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ ssh            │ functional-767781 ssh pgrep buildkitd                                                                            │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │                     │
	│ image          │ functional-767781 image build -t localhost/my-image:functional-767781 testdata/build --alsologtostderr           │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ image          │ functional-767781 image ls                                                                                       │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ image          │ functional-767781 image ls --format yaml --alsologtostderr                                                       │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	│ image          │ functional-767781 image ls --format table --alsologtostderr                                                      │ functional-767781 │ jenkins │ v1.37.0 │ 18 Oct 25 12:36 UTC │ 18 Oct 25 12:36 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:36:00
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:36:00.991535  863824 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:36:00.991697  863824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:36:00.991709  863824 out.go:374] Setting ErrFile to fd 2...
	I1018 12:36:00.991714  863824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:36:00.992784  863824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:36:00.993255  863824 out.go:368] Setting JSON to false
	I1018 12:36:00.994128  863824 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15513,"bootTime":1760775448,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:36:00.994198  863824 start.go:141] virtualization:  
	I1018 12:36:00.997375  863824 out.go:179] * [functional-767781] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:36:01.000294  863824 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:36:01.000443  863824 notify.go:220] Checking for updates...
	I1018 12:36:01.010507  863824 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:36:01.013468  863824 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:36:01.016346  863824 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:36:01.019962  863824 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:36:01.022984  863824 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:36:01.026380  863824 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:36:01.026930  863824 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:36:01.048744  863824 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:36:01.048978  863824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:36:01.119090  863824 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:36:01.109298698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:36:01.119196  863824 docker.go:318] overlay module found
	I1018 12:36:01.122318  863824 out.go:179] * Using the docker driver based on the existing profile
	I1018 12:36:01.125174  863824 start.go:305] selected driver: docker
	I1018 12:36:01.125203  863824 start.go:925] validating driver "docker" against &{Name:functional-767781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-767781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:36:01.125310  863824 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:36:01.128801  863824 out.go:203] 
	W1018 12:36:01.131773  863824 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1018 12:36:01.134629  863824 out.go:203] 
	
	
	==> CRI-O <==
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.087363635Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf" id=4358618c-0c50-4e90-8942-ef553f2e422b name=/runtime.v1.ImageService/PullImage
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.08813088Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=26cead82-b7c7-49dc-818d-11284cbbbe27 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.0896326Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=8d1a1c3d-dc60-4031-9f60-aba77093bf4a name=/runtime.v1.ImageService/PullImage
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.097750335Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.098315683Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8a414b29-4187-4899-812c-e74fa57369a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.104695602Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-znl27/kubernetes-dashboard" id=2f8b1511-08d5-4737-aeb8-faf10d9a1c72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.105561302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.110789028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.111145619Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0fe36bd98b78e9b620ca518632b22a754e5a2b7fe837243180284f3dc1d54475/merged/etc/group: no such file or directory"
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.111625042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.130031507Z" level=info msg="Created container 2293ed1e8c7ad277051a9bc2f22c53ae8a3cd5fab2aed1275070a8e59afbdd97: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-znl27/kubernetes-dashboard" id=2f8b1511-08d5-4737-aeb8-faf10d9a1c72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.130920272Z" level=info msg="Starting container: 2293ed1e8c7ad277051a9bc2f22c53ae8a3cd5fab2aed1275070a8e59afbdd97" id=456ea163-3d9f-43f5-90c9-248267269555 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.137428053Z" level=info msg="Started container" PID=6829 containerID=2293ed1e8c7ad277051a9bc2f22c53ae8a3cd5fab2aed1275070a8e59afbdd97 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-znl27/kubernetes-dashboard id=456ea163-3d9f-43f5-90c9-248267269555 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70829977ba8c68a50bf8edf63765dd661e3401bc7263c0d52090dc2cdab490e4
	Oct 18 12:36:07 functional-767781 crio[3570]: time="2025-10-18T12:36:07.365959937Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.255377092Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a" id=8d1a1c3d-dc60-4031-9f60-aba77093bf4a name=/runtime.v1.ImageService/PullImage
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.255984705Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ca7be689-2675-4f67-97bc-69e564adc9c5 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.259309075Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=501380f0-896f-4cce-a3b1-867e46e91539 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.266339512Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-tlwwg/dashboard-metrics-scraper" id=d80c67f0-adfb-498b-a020-cb436ac29443 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.267196038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.272641498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.272995644Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/465dd7e121e68f9d311540811edbd0414f6ca22e8b481cd9828c79416afc19f2/merged/etc/group: no such file or directory"
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.273459156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.299353328Z" level=info msg="Created container 1bc92b8deb7677a64c86579aebdd0d7db45f1b985606a00ba3268b07d334f100: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-tlwwg/dashboard-metrics-scraper" id=d80c67f0-adfb-498b-a020-cb436ac29443 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.302082294Z" level=info msg="Starting container: 1bc92b8deb7677a64c86579aebdd0d7db45f1b985606a00ba3268b07d334f100" id=ef902bbd-68d0-4a8f-a495-cbf2adfdf994 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:36:08 functional-767781 crio[3570]: time="2025-10-18T12:36:08.305141881Z" level=info msg="Started container" PID=6875 containerID=1bc92b8deb7677a64c86579aebdd0d7db45f1b985606a00ba3268b07d334f100 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-tlwwg/dashboard-metrics-scraper id=ef902bbd-68d0-4a8f-a495-cbf2adfdf994 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e9c21862863cf27fadace87680d4769d161a0a2afb6b5cf1e86ca5b78ebe3c9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1bc92b8deb767       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   12 seconds ago      Running             dashboard-metrics-scraper   0                   4e9c21862863c       dashboard-metrics-scraper-77bf4d6c4c-tlwwg   kubernetes-dashboard
	2293ed1e8c7ad       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         14 seconds ago      Running             kubernetes-dashboard        0                   70829977ba8c6       kubernetes-dashboard-855c9754f9-znl27        kubernetes-dashboard
	e638afa3d50b9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              28 seconds ago      Exited              mount-munger                0                   59b5bb11fcc3c       busybox-mount                                default
	4dcd80bbb2ae2       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a                  10 minutes ago      Running             myfrontend                  0                   93fb91ee42482       sp-pod                                       default
	facc0d6253104       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                  10 minutes ago      Running             nginx                       0                   87b4e0a526820       nginx-svc                                    default
	0045ccd020476       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 2                   7f2a2ff9fadc2       kindnet-q8gcp                                kube-system
	0bb5d0892a32a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  2                   2496bb75320cb       kube-proxy-rkvjd                             kube-system
	09ffc9838f45e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         2                   a32069a88832a       storage-provisioner                          kube-system
	d35a9c004f33a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   84ab1781783c2       coredns-66bc5c9577-slqzw                     kube-system
	eabf92be16f40       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   c5d9cbba25260       kube-apiserver-functional-767781             kube-system
	da0e27645bbb1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              2                   79d71304dab27       kube-scheduler-functional-767781             kube-system
	fc0ad4cd80c51       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     2                   072d771948f4f       kube-controller-manager-functional-767781    kube-system
	00900f57d455b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        2                   cb9cb7bd85f9b       etcd-functional-767781                       kube-system
	c7258252441b5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Exited              kube-scheduler              1                   79d71304dab27       kube-scheduler-functional-767781             kube-system
	a8a0d59b486ce       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         1                   a32069a88832a       storage-provisioner                          kube-system
	fd3377a937653       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Exited              kube-proxy                  1                   2496bb75320cb       kube-proxy-rkvjd                             kube-system
	c71b7e75fc9b7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Exited              etcd                        1                   cb9cb7bd85f9b       etcd-functional-767781                       kube-system
	747147c0d9b53       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Exited              coredns                     1                   84ab1781783c2       coredns-66bc5c9577-slqzw                     kube-system
	657671924e5fe       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 1                   7f2a2ff9fadc2       kindnet-q8gcp                                kube-system
	da6cdb365bd11       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Exited              kube-controller-manager     1                   072d771948f4f       kube-controller-manager-functional-767781    kube-system
	
	
	==> coredns [747147c0d9b531d06590d1ece2a4c2ce443d4cf3e2af13de417f8f4f8eec9850] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46863 - 24212 "HINFO IN 7665273875307125004.5698793664007393292. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004703399s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d35a9c004f33ab0fb6afee6f53adbbf9d0928075c3a5211be3cdd4d47d38b6b5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59399 - 19841 "HINFO IN 7592827150355177011.630895493330728502. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.033607219s
	
	
	==> describe nodes <==
	Name:               functional-767781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-767781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=functional-767781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_23_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:23:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-767781
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:36:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:36:15 +0000   Sat, 18 Oct 2025 12:23:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:36:15 +0000   Sat, 18 Oct 2025 12:23:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:36:15 +0000   Sat, 18 Oct 2025 12:23:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:36:15 +0000   Sat, 18 Oct 2025 12:24:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-767781
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                33a56ac3-96cd-4724-aede-de35458cedb7
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-drpvk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-rxlvn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-slqzw                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-767781                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-q8gcp                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-767781              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-767781     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-rkvjd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-767781              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-tlwwg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-znl27         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-767781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-767781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-767781 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-767781 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-767781 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-767781 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-767781 event: Registered Node functional-767781 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-767781 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-767781 event: Registered Node functional-767781 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-767781 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-767781 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-767781 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-767781 event: Registered Node functional-767781 in Controller
	
	
	==> dmesg <==
	[  +0.085900] overlayfs: idmapped layers are currently not supported
	[Oct18 10:56] overlayfs: idmapped layers are currently not supported
	[ +18.116656] overlayfs: idmapped layers are currently not supported
	[Oct18 10:58] overlayfs: idmapped layers are currently not supported
	[  +3.156194] overlayfs: idmapped layers are currently not supported
	[Oct18 11:00] overlayfs: idmapped layers are currently not supported
	[Oct18 11:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 11:22] FS-Cache: Duplicate cookie detected
	[  +0.000787] FS-Cache: O-cookie c=00000037 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000d8d7ca74{9P.session} n=00000000f5b34d7b
	[  +0.001120] FS-Cache: O-key=[10] '34323937363632323639'
	[  +0.000787] FS-Cache: N-cookie c=00000038 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=00000000204faf8b
	[  +0.001107] FS-Cache: N-key=[10] '34323937363632323639'
	[Oct18 12:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 12:16] overlayfs: idmapped layers are currently not supported
	[Oct18 12:22] overlayfs: idmapped layers are currently not supported
	[Oct18 12:23] overlayfs: idmapped layers are currently not supported
	[Oct18 12:35] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000048 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001047] FS-Cache: O-cookie d=00000000d8d7ca74{9P.session} n=000000006094aa8a
	[  +0.001123] FS-Cache: O-key=[10] '34323938373639393330'
	[  +0.000853] FS-Cache: N-cookie c=00000049 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=000000001487bd7a
	[  +0.001121] FS-Cache: N-key=[10] '34323938373639393330'
	
	
	==> etcd [00900f57d455b534730053c3d82fcf62d10b33e9d24d6571a500124967bd8779] <==
	{"level":"warn","ts":"2025-10-18T12:25:10.064659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.088289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.104451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.136854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.144388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.155807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.175721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.196226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.227375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.229835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.258243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.274995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.292061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.305945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.325677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.341070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.366763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.386283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.411042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.433375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.452912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:25:10.546418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45380","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:35:09.250313Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1130}
	{"level":"info","ts":"2025-10-18T12:35:09.273769Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1130,"took":"23.104252ms","hash":277226525,"current-db-size-bytes":3211264,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1437696,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-18T12:35:09.273821Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":277226525,"revision":1130,"compact-revision":-1}
	
	
	==> etcd [c71b7e75fc9b72c9699ec2daa6e0954350ed3e15e76b4c14097a22690ed3de6a] <==
	{"level":"warn","ts":"2025-10-18T12:24:25.306493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:25.340392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:25.352741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:25.375046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:25.395007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:25.409257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:24:25.534569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45996","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:24:47.231324Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:24:47.231436Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-767781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T12:24:47.231562Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:24:47.367465Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:24:47.367549Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:24:47.367570Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T12:24:47.367624Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:24:47.367708Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:24:47.367744Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:24:47.367753Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:24:47.367812Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:24:47.367878Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:24:47.367918Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:24:47.367953Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:24:47.371470Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T12:24:47.371557Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:24:47.371586Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T12:24:47.371601Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-767781","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 12:36:21 up  4:18,  0 user,  load average: 0.98, 0.72, 1.66
	Linux functional-767781 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0045ccd02047603a2bf5f4f1ea40cc2bff1227ae55b443b6180efd4421804a98] <==
	I1018 12:34:12.912607       1 main.go:301] handling current node
	I1018 12:34:22.912237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:34:22.912275       1 main.go:301] handling current node
	I1018 12:34:32.912710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:34:32.912744       1 main.go:301] handling current node
	I1018 12:34:42.912166       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:34:42.912202       1 main.go:301] handling current node
	I1018 12:34:52.919829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:34:52.919864       1 main.go:301] handling current node
	I1018 12:35:02.912840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:35:02.912874       1 main.go:301] handling current node
	I1018 12:35:12.913552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:35:12.913673       1 main.go:301] handling current node
	I1018 12:35:22.917922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:35:22.917975       1 main.go:301] handling current node
	I1018 12:35:32.912625       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:35:32.912661       1 main.go:301] handling current node
	I1018 12:35:42.919745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:35:42.919866       1 main.go:301] handling current node
	I1018 12:35:52.912871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:35:52.912904       1 main.go:301] handling current node
	I1018 12:36:02.913135       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:36:02.913267       1 main.go:301] handling current node
	I1018 12:36:12.912969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:36:12.912998       1 main.go:301] handling current node
	
	
	==> kindnet [657671924e5fe8ff610b3e71350e000b980ca3687deb0202c7c767fb8dd7780d] <==
	I1018 12:24:22.630023       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:24:22.630446       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 12:24:22.630605       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:24:22.630617       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:24:22.630627       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:24:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:24:22.817059       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:24:22.823768       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:24:22.823843       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:24:22.824832       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:24:26.725719       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:24:26.725768       1 metrics.go:72] Registering metrics
	I1018 12:24:26.725823       1 controller.go:711] "Syncing nftables rules"
	I1018 12:24:32.816794       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:24:32.816856       1 main.go:301] handling current node
	I1018 12:24:42.820496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:24:42.820530       1 main.go:301] handling current node
	
	
	==> kube-apiserver [eabf92be16f40bef0a0298993505b396e6611180fd9c7a86df5924cf635fbfde] <==
	I1018 12:25:11.453275       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:25:11.458781       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:25:11.487835       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:25:11.506206       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 12:25:11.506245       1 policy_source.go:240] refreshing policies
	I1018 12:25:11.557324       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:25:12.149943       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:25:12.251387       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1018 12:25:12.699094       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 12:25:12.705129       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:25:13.275179       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:25:13.577636       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:25:13.654875       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:25:13.662858       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:25:14.957698       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:25:15.056599       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:25:30.483302       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.255.185"}
	I1018 12:25:39.657035       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.76.85"}
	I1018 12:25:43.181983       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.11.80"}
	E1018 12:26:18.926609       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50076: use of closed network connection
	I1018 12:26:19.290469       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.107.98.12"}
	I1018 12:35:11.399980       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:36:02.165047       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:36:02.516635       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.8.146"}
	I1018 12:36:02.546819       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.76.224"}
	
	
	==> kube-controller-manager [da6cdb365bd11b335ff8150db5c15fa3f89cb1a7437c03dbc5cf4135601a1896] <==
	I1018 12:24:30.003106       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 12:24:30.003162       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 12:24:30.003189       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 12:24:30.011769       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:24:30.012964       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:24:30.014878       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:24:30.017536       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:24:30.018121       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:24:30.040958       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:24:30.041079       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:24:30.041108       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:24:30.041169       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 12:24:30.041361       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 12:24:30.043258       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 12:24:30.043318       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 12:24:30.043341       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:24:30.043432       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 12:24:30.044188       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 12:24:30.044364       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:24:30.044387       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:24:30.044602       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:24:30.044706       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:24:30.046584       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:24:30.049189       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 12:24:30.051732       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-controller-manager [fc0ad4cd80c519afc5b144c592827c8e2e0f20f3a06189c0f14c4949a1adc8ea] <==
	I1018 12:25:14.784095       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 12:25:14.785929       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 12:25:14.789279       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 12:25:14.795858       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 12:25:14.799330       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:25:14.799612       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:25:14.799744       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:25:14.799762       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:25:14.799786       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:25:14.799919       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:25:14.800241       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 12:25:14.800567       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:25:14.800694       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:25:14.802074       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:25:14.817923       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:25:14.822303       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:25:14.822339       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:25:14.822347       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1018 12:36:02.279439       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:36:02.284456       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:36:02.303485       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:36:02.311629       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:36:02.316992       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:36:02.321157       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:36:02.328270       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [0bb5d0892a32a1f465c06b250dba0f7c58de50d0c469ad85346a051a51c8deea] <==
	I1018 12:25:12.780804       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:25:12.897422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:25:12.998899       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:25:12.998933       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:25:12.999001       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:25:13.044968       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:25:13.045091       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:25:13.050077       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:25:13.050451       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:25:13.050672       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:25:13.052074       1 config.go:200] "Starting service config controller"
	I1018 12:25:13.052459       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:25:13.052527       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:25:13.052580       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:25:13.052693       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:25:13.052735       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:25:13.053532       1 config.go:309] "Starting node config controller"
	I1018 12:25:13.053756       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:25:13.053812       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:25:13.153456       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:25:13.153490       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:25:13.153539       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fd3377a93765330e309406a19e47bfdc8b39df1054fc78b71745eddd7d8bbee1] <==
	I1018 12:24:25.267701       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:24:25.988667       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 12:24:26.711997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-767781\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 12:24:27.988998       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:24:27.989153       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:24:27.989263       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:24:28.015544       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:24:28.015719       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:24:28.020698       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:24:28.021154       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:24:28.021868       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:24:28.055242       1 config.go:200] "Starting service config controller"
	I1018 12:24:28.055348       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:24:28.055524       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:24:28.055574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:24:28.055721       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:24:28.055763       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:24:28.057967       1 config.go:309] "Starting node config controller"
	I1018 12:24:28.103861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:24:28.103885       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:24:28.159812       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:24:28.167769       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:24:28.174474       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c7258252441b5f05e917d01d206d417b0afb9f903629ed9fadae6ac5596ac336] <==
	I1018 12:24:25.802718       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:24:28.566707       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 12:24:28.566742       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:24:28.571570       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 12:24:28.571624       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 12:24:28.571701       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:24:28.571716       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:24:28.571734       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 12:24:28.571745       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 12:24:28.572284       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:24:28.572401       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 12:24:28.673694       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 12:24:28.673753       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 12:24:28.673837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:24:47.217223       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 12:24:47.217255       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 12:24:47.217282       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 12:24:47.217305       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:24:47.217329       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1018 12:24:47.217350       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 12:24:47.217700       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 12:24:47.217731       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [da0e27645bbb18693ead3248d8e11b2be56b60f115374f92f71c0b8f9838be9b] <==
	I1018 12:25:11.339067       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:25:11.346832       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:25:11.347849       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:11.347912       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:25:11.347956       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:25:11.368886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:25:11.368985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:25:11.369061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:25:11.369122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:25:11.369187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:25:11.369250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:25:11.369313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:25:11.369369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:25:11.369424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:25:11.369530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:25:11.369587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:25:11.369637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:25:11.369681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:25:11.369718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:25:11.369774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:25:11.369820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:25:11.369865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:25:11.369906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:25:11.373089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 12:25:12.550890       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:35:43 functional-767781 kubelet[3884]: E1018 12:35:43.243987    3884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rxlvn" podUID="110c95ae-3b62-475e-ba39-4b7a5e62abc0"
	Oct 18 12:35:50 functional-767781 kubelet[3884]: I1018 12:35:50.163944    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmnv6\" (UniqueName: \"kubernetes.io/projected/dad107e0-75a5-4474-a2d5-f54992d76c11-kube-api-access-zmnv6\") pod \"busybox-mount\" (UID: \"dad107e0-75a5-4474-a2d5-f54992d76c11\") " pod="default/busybox-mount"
	Oct 18 12:35:50 functional-767781 kubelet[3884]: I1018 12:35:50.164007    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/dad107e0-75a5-4474-a2d5-f54992d76c11-test-volume\") pod \"busybox-mount\" (UID: \"dad107e0-75a5-4474-a2d5-f54992d76c11\") " pod="default/busybox-mount"
	Oct 18 12:35:50 functional-767781 kubelet[3884]: W1018 12:35:50.316217    3884 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89/crio-59b5bb11fcc3cc9128d2dc3e0d47188c7c965a6bb1b0c9980d92d25f9dd6bf79 WatchSource:0}: Error finding container 59b5bb11fcc3cc9128d2dc3e0d47188c7c965a6bb1b0c9980d92d25f9dd6bf79: Status 404 returned error can't find the container with id 59b5bb11fcc3cc9128d2dc3e0d47188c7c965a6bb1b0c9980d92d25f9dd6bf79
	Oct 18 12:35:51 functional-767781 kubelet[3884]: E1018 12:35:51.242685    3884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-drpvk" podUID="ed4d2652-ae30-48cd-bf47-853eba66bd52"
	Oct 18 12:35:54 functional-767781 kubelet[3884]: I1018 12:35:54.289667    3884 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/dad107e0-75a5-4474-a2d5-f54992d76c11-test-volume\") pod \"dad107e0-75a5-4474-a2d5-f54992d76c11\" (UID: \"dad107e0-75a5-4474-a2d5-f54992d76c11\") "
	Oct 18 12:35:54 functional-767781 kubelet[3884]: I1018 12:35:54.290209    3884 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmnv6\" (UniqueName: \"kubernetes.io/projected/dad107e0-75a5-4474-a2d5-f54992d76c11-kube-api-access-zmnv6\") pod \"dad107e0-75a5-4474-a2d5-f54992d76c11\" (UID: \"dad107e0-75a5-4474-a2d5-f54992d76c11\") "
	Oct 18 12:35:54 functional-767781 kubelet[3884]: I1018 12:35:54.289873    3884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dad107e0-75a5-4474-a2d5-f54992d76c11-test-volume" (OuterVolumeSpecName: "test-volume") pod "dad107e0-75a5-4474-a2d5-f54992d76c11" (UID: "dad107e0-75a5-4474-a2d5-f54992d76c11"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 12:35:54 functional-767781 kubelet[3884]: I1018 12:35:54.293704    3884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad107e0-75a5-4474-a2d5-f54992d76c11-kube-api-access-zmnv6" (OuterVolumeSpecName: "kube-api-access-zmnv6") pod "dad107e0-75a5-4474-a2d5-f54992d76c11" (UID: "dad107e0-75a5-4474-a2d5-f54992d76c11"). InnerVolumeSpecName "kube-api-access-zmnv6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 12:35:54 functional-767781 kubelet[3884]: I1018 12:35:54.390675    3884 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zmnv6\" (UniqueName: \"kubernetes.io/projected/dad107e0-75a5-4474-a2d5-f54992d76c11-kube-api-access-zmnv6\") on node \"functional-767781\" DevicePath \"\""
	Oct 18 12:35:54 functional-767781 kubelet[3884]: I1018 12:35:54.390728    3884 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/dad107e0-75a5-4474-a2d5-f54992d76c11-test-volume\") on node \"functional-767781\" DevicePath \"\""
	Oct 18 12:35:55 functional-767781 kubelet[3884]: I1018 12:35:55.156377    3884 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b5bb11fcc3cc9128d2dc3e0d47188c7c965a6bb1b0c9980d92d25f9dd6bf79"
	Oct 18 12:35:58 functional-767781 kubelet[3884]: E1018 12:35:58.242145    3884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rxlvn" podUID="110c95ae-3b62-475e-ba39-4b7a5e62abc0"
	Oct 18 12:36:02 functional-767781 kubelet[3884]: I1018 12:36:02.502205    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/11a04656-ddfb-43f4-b6a8-184a1a313543-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-znl27\" (UID: \"11a04656-ddfb-43f4-b6a8-184a1a313543\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-znl27"
	Oct 18 12:36:02 functional-767781 kubelet[3884]: I1018 12:36:02.502728    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b179944d-c7e0-474d-9bef-928a08cae268-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-tlwwg\" (UID: \"b179944d-c7e0-474d-9bef-928a08cae268\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-tlwwg"
	Oct 18 12:36:02 functional-767781 kubelet[3884]: I1018 12:36:02.502844    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xxf\" (UniqueName: \"kubernetes.io/projected/11a04656-ddfb-43f4-b6a8-184a1a313543-kube-api-access-m8xxf\") pod \"kubernetes-dashboard-855c9754f9-znl27\" (UID: \"11a04656-ddfb-43f4-b6a8-184a1a313543\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-znl27"
	Oct 18 12:36:02 functional-767781 kubelet[3884]: I1018 12:36:02.502935    3884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-464dh\" (UniqueName: \"kubernetes.io/projected/b179944d-c7e0-474d-9bef-928a08cae268-kube-api-access-464dh\") pod \"dashboard-metrics-scraper-77bf4d6c4c-tlwwg\" (UID: \"b179944d-c7e0-474d-9bef-928a08cae268\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-tlwwg"
	Oct 18 12:36:02 functional-767781 kubelet[3884]: W1018 12:36:02.738532    3884 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2dfc350f4de733f1443ab06fe6d9e6f9df3bad43e53d3855484e4e5037569c89/crio-4e9c21862863cf27fadace87680d4769d161a0a2afb6b5cf1e86ca5b78ebe3c9 WatchSource:0}: Error finding container 4e9c21862863cf27fadace87680d4769d161a0a2afb6b5cf1e86ca5b78ebe3c9: Status 404 returned error can't find the container with id 4e9c21862863cf27fadace87680d4769d161a0a2afb6b5cf1e86ca5b78ebe3c9
	Oct 18 12:36:06 functional-767781 kubelet[3884]: E1018 12:36:06.242354    3884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-drpvk" podUID="ed4d2652-ae30-48cd-bf47-853eba66bd52"
	Oct 18 12:36:09 functional-767781 kubelet[3884]: I1018 12:36:09.230111    3884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-znl27" podStartSLOduration=2.841944282 podStartE2EDuration="7.230093782s" podCreationTimestamp="2025-10-18 12:36:02 +0000 UTC" firstStartedPulling="2025-10-18 12:36:02.701322006 +0000 UTC m=+655.646328235" lastFinishedPulling="2025-10-18 12:36:07.089471506 +0000 UTC m=+660.034477735" observedRunningTime="2025-10-18 12:36:07.236027396 +0000 UTC m=+660.181033633" watchObservedRunningTime="2025-10-18 12:36:09.230093782 +0000 UTC m=+662.175100019"
	Oct 18 12:36:13 functional-767781 kubelet[3884]: E1018 12:36:13.242004    3884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-rxlvn" podUID="110c95ae-3b62-475e-ba39-4b7a5e62abc0"
	Oct 18 12:36:21 functional-767781 kubelet[3884]: E1018 12:36:21.243724    3884 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 12:36:21 functional-767781 kubelet[3884]: E1018 12:36:21.244220    3884 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 18 12:36:21 functional-767781 kubelet[3884]: E1018 12:36:21.244321    3884 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-drpvk_default(ed4d2652-ae30-48cd-bf47-853eba66bd52): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 18 12:36:21 functional-767781 kubelet[3884]: E1018 12:36:21.244356    3884 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-drpvk" podUID="ed4d2652-ae30-48cd-bf47-853eba66bd52"
	
	
	==> kubernetes-dashboard [2293ed1e8c7ad277051a9bc2f22c53ae8a3cd5fab2aed1275070a8e59afbdd97] <==
	2025/10/18 12:36:07 Starting overwatch
	2025/10/18 12:36:07 Using namespace: kubernetes-dashboard
	2025/10/18 12:36:07 Using in-cluster config to connect to apiserver
	2025/10/18 12:36:07 Using secret token for csrf signing
	2025/10/18 12:36:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 12:36:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 12:36:07 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 12:36:07 Generating JWE encryption key
	2025/10/18 12:36:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 12:36:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 12:36:07 Initializing JWE encryption key from synchronized object
	2025/10/18 12:36:07 Creating in-cluster Sidecar client
	2025/10/18 12:36:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 12:36:07 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [09ffc9838f45e87fe44e07adaf8bf01c5d1fb9293191c9af9512bce00bcda5c0] <==
	W1018 12:35:57.220329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:35:59.245328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:35:59.254260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:01.260942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:01.266496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:03.270194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:03.274753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:05.277810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:05.282674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:07.286278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:07.291314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:09.294734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:09.299424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:11.303964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:11.313356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:13.317769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:13.323278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:15.326828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:15.334883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:17.338940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:17.345221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:19.348472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:19.354244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:21.367990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:36:21.373584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a8a0d59b486cec7133c9888ade16b08d32a63650d3463c74b0d81cd212c10c50] <==
	I1018 12:24:23.506363       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:24:26.749292       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:24:26.749351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:24:26.771127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:24:30.226114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:24:34.489076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:24:38.089803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:24:41.143681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:24:44.165740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:24:44.173539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:24:44.173782       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:24:44.179270       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-767781_1b97179a-8d70-4eb7-bd40-fa6eb4cc5c10!
	W1018 12:24:44.179952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:24:44.180794       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6478d77c-f6a7-4357-87f7-496117c5f6f6", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-767781_1b97179a-8d70-4eb7-bd40-fa6eb4cc5c10 became leader
	W1018 12:24:44.192042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:24:44.279642       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-767781_1b97179a-8d70-4eb7-bd40-fa6eb4cc5c10!
	W1018 12:24:46.206944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:24:46.216058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
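The repeated ErrImagePull entries in the kubelet log above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") show CRI-O refusing to resolve the unqualified image reference kicbase/echo-server, which is what keeps the hello-node pods in ImagePullBackOff. As an illustration only, not something this test run configures, a short-name alias in a containers-registries.conf(5) drop-in would pin the short name to a single registry; the drop-in path and the docker.io target below are assumptions:

	# /etc/containers/registries.conf.d/01-echo-server.conf  (hypothetical drop-in, for illustration)
	# Keep enforcing mode, but resolve this short name unambiguously.
	short-name-mode = "enforcing"

	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Equivalently, referencing the image by a fully qualified name (for example docker.io/kicbase/echo-server:latest) in the Deployment spec sidesteps short-name resolution entirely.
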
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767781 -n functional-767781
helpers_test.go:269: (dbg) Run:  kubectl --context functional-767781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-drpvk hello-node-connect-7d85dfc575-rxlvn
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-767781 describe pod busybox-mount hello-node-75c85bcc94-drpvk hello-node-connect-7d85dfc575-rxlvn
helpers_test.go:290: (dbg) kubectl --context functional-767781 describe pod busybox-mount hello-node-75c85bcc94-drpvk hello-node-connect-7d85dfc575-rxlvn:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-767781/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:35:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e638afa3d50b92035c588667d9e2b97603bad71526aed86d58a7e29b44524104
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 12:35:52 +0000
	      Finished:     Sat, 18 Oct 2025 12:35:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zmnv6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zmnv6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  33s   default-scheduler  Successfully assigned default/busybox-mount to functional-767781
	  Normal  Pulling    32s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     30s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.082s (2.082s including waiting). Image size: 3774172 bytes.
	  Normal  Created    30s   kubelet            Created container: mount-munger
	  Normal  Started    30s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-drpvk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-767781/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:25:39 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dj9d7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dj9d7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-drpvk to functional-767781
	  Normal   Pulling    7m52s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m52s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m52s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    31s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     31s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-rxlvn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-767781/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:26:19 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kqtmr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kqtmr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-rxlvn to functional-767781
	  Normal   Pulling    7m18s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m18s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m18s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m57s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m57s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.57s)
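
Note: the post-mortem above shows both hello-node pods stuck in ImagePullBackOff because CRI-O's short-name policy is enforcing and the unqualified reference "kicbase/echo-server" resolves ambiguously. A minimal reproduction sketch using a fully qualified image reference instead; the docker.io prefix and the 1.0 tag are assumptions for illustration, not values taken from this report:

    # Hypothetical workaround sketch: a fully qualified image name avoids
    # CRI-O short-name resolution against its search registries.
    kubectl --context functional-767781 create deployment hello-node-fq \
      --image=docker.io/kicbase/echo-server:1.0
    kubectl --context functional-767781 get pods -l app=hello-node-fq -w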

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image load --daemon kicbase/echo-server:functional-767781 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-767781" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image load --daemon kicbase/echo-server:functional-767781 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-767781" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-767781
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image load --daemon kicbase/echo-server:functional-767781 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-767781" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)
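
Note: the three image-load failures above share one pattern: `image load --daemon` returns without error but the tag never appears in `image ls`. A short sketch of the expected round trip when reproducing locally, reusing the exact commands this test runs (the grep is only an illustration):

    # Expected flow for loading a host Docker image into the minikube node.
    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-767781
    out/minikube-linux-arm64 -p functional-767781 image load --daemon kicbase/echo-server:functional-767781
    # The tag should then be listed by the in-cluster runtime:
    out/minikube-linux-arm64 -p functional-767781 image ls | grep echo-server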

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-767781 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-767781 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-drpvk" [ed4d2652-ae30-48cd-bf47-853eba66bd52] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767781 -n functional-767781
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 12:35:40.02608766 +0000 UTC m=+1205.662648409
functional_test.go:1460: (dbg) Run:  kubectl --context functional-767781 describe po hello-node-75c85bcc94-drpvk -n default
functional_test.go:1460: (dbg) kubectl --context functional-767781 describe po hello-node-75c85bcc94-drpvk -n default:
Name:             hello-node-75c85bcc94-drpvk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-767781/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:25:39 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dj9d7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-dj9d7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-drpvk to functional-767781
Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-767781 logs hello-node-75c85bcc94-drpvk -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-767781 logs hello-node-75c85bcc94-drpvk -n default: exit status 1 (105.215175ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-drpvk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-767781 logs hello-node-75c85bcc94-drpvk -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.89s)
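
Note: the repeated "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" events point at the node's containers-registries configuration rather than at the deployment itself. A hedged check of that policy from the host; the file path and field names are the standard containers/image ones, assumed to apply to this node image:

    # Inspect the short-name policy CRI-O is using inside the minikube node.
    out/minikube-linux-arm64 -p functional-767781 ssh -- sudo cat /etc/containers/registries.conf
    # Fields of interest: short-name-mode (enforcing vs. permissive) and
    # unqualified-search-registries, which decide how "kicbase/echo-server"
    # is expanded to a fully qualified reference.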

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image save kicbase/echo-server:functional-767781 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1018 12:25:41.168326  859750 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:25:41.169347  859750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:25:41.169359  859750 out.go:374] Setting ErrFile to fd 2...
	I1018 12:25:41.169365  859750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:25:41.169736  859750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:25:41.170641  859750 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:25:41.170791  859750 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:25:41.171280  859750 cli_runner.go:164] Run: docker container inspect functional-767781 --format={{.State.Status}}
	I1018 12:25:41.190020  859750 ssh_runner.go:195] Run: systemctl --version
	I1018 12:25:41.190084  859750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767781
	I1018 12:25:41.209115  859750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/functional-767781/id_rsa Username:docker}
	I1018 12:25:41.318514  859750 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1018 12:25:41.318609  859750 cache_images.go:254] Failed to load cached images for "functional-767781": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1018 12:25:41.318636  859750 cache_images.go:266] failed pushing to: functional-767781

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
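
Note: this failure is a knock-on effect of ImageSaveToFile above: the tarball at echo-server-save.tar was never written, so the subsequent load has nothing to read ("no such file or directory" in the stderr). A minimal sketch of the save/load round trip these two tests exercise, with paths and commands reused from the report:

    # Save an image from the minikube node to a tarball, then load it back.
    out/minikube-linux-arm64 -p functional-767781 image save kicbase/echo-server:functional-767781 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
    ls -l /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar   # should exist before loading
    out/minikube-linux-arm64 -p functional-767781 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar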

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-767781
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image save --daemon kicbase/echo-server:functional-767781 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-767781
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-767781: exit status 1 (17.870919ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-767781

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-767781

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
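
Note: here `image save --daemon` returns without error but the expected localhost/kicbase/echo-server:functional-767781 tag never shows up in the host Docker daemon, consistent with the earlier load failures (the source image was never present in the node). A brief verification sketch:

    # After a successful save --daemon, the image should be visible to Docker on the host.
    out/minikube-linux-arm64 -p functional-767781 image save --daemon kicbase/echo-server:functional-767781
    docker image inspect localhost/kicbase/echo-server:functional-767781 --format '{{.Id}}'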

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 service --namespace=default --https --url hello-node: exit status 115 (411.687245ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30770
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-767781 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 service hello-node --url --format={{.IP}}: exit status 115 (455.709424ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-767781 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 service hello-node --url: exit status 115 (453.643915ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30770
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-767781 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30770
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.45s)
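
Note: the three ServiceCmd failures (HTTPS, Format, URL) all exit with SVC_UNREACHABLE because no hello-node pod ever became Ready; the NodePort URL itself is printed correctly each time. A short readiness check that would normally precede asking minikube for the URL; plain kubectl, nothing specific to this report beyond the profile and service names:

    # Confirm the backing pods and endpoints exist before fetching the service URL.
    kubectl --context functional-767781 get pods -l app=hello-node
    kubectl --context functional-767781 get endpoints hello-node
    out/minikube-linux-arm64 -p functional-767781 service hello-node --url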

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (392.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 12:45:39.669769  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:46:07.388607  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:48:42.457932  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:50:39.669983  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-904693 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m28.933939672s)

                                                
                                                
-- stdout --
	* [ha-904693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-904693" primary control-plane node in "ha-904693" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-904693-m02" control-plane node in "ha-904693" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-904693-m04" worker node in "ha-904693" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:44:25.711916  892123 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:44:25.712088  892123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.712119  892123 out.go:374] Setting ErrFile to fd 2...
	I1018 12:44:25.712138  892123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.712423  892123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:44:25.712837  892123 out.go:368] Setting JSON to false
	I1018 12:44:25.713721  892123 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16018,"bootTime":1760775448,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:44:25.713821  892123 start.go:141] virtualization:  
	I1018 12:44:25.719185  892123 out.go:179] * [ha-904693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:44:25.722230  892123 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:44:25.722359  892123 notify.go:220] Checking for updates...
	I1018 12:44:25.728356  892123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:44:25.731393  892123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:25.734246  892123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:44:25.737415  892123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:44:25.740192  892123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:44:25.743783  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:25.744347  892123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:44:25.769253  892123 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:44:25.769378  892123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:44:25.830176  892123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:44:25.820847832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:44:25.830279  892123 docker.go:318] overlay module found
	I1018 12:44:25.833295  892123 out.go:179] * Using the docker driver based on existing profile
	I1018 12:44:25.836144  892123 start.go:305] selected driver: docker
	I1018 12:44:25.836180  892123 start.go:925] validating driver "docker" against &{Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:25.836325  892123 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:44:25.836440  892123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:44:25.891844  892123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:44:25.88247637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:44:25.892307  892123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:44:25.892333  892123 cni.go:84] Creating CNI manager for ""
	I1018 12:44:25.892393  892123 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1018 12:44:25.892444  892123 start.go:349] cluster config:
	{Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:25.895566  892123 out.go:179] * Starting "ha-904693" primary control-plane node in "ha-904693" cluster
	I1018 12:44:25.898242  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:44:25.901058  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:44:25.903961  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:44:25.904124  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:25.904158  892123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 12:44:25.904169  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:44:25.904245  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:44:25.904261  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:44:25.904405  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:25.923338  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:44:25.923361  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:44:25.923378  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:44:25.923408  892123 start.go:360] acquireMachinesLock for ha-904693: {Name:mk0b11e6cfae1fdc8dfba1eeb3a517fb42d395b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:44:25.923474  892123 start.go:364] duration metric: took 44.365µs to acquireMachinesLock for "ha-904693"
	I1018 12:44:25.923496  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:44:25.923506  892123 fix.go:54] fixHost starting: 
	I1018 12:44:25.923797  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:25.940565  892123 fix.go:112] recreateIfNeeded on ha-904693: state=Stopped err=<nil>
	W1018 12:44:25.940596  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:44:25.943864  892123 out.go:252] * Restarting existing docker container for "ha-904693" ...
	I1018 12:44:25.943958  892123 cli_runner.go:164] Run: docker start ha-904693
	I1018 12:44:26.194711  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:26.215813  892123 kic.go:430] container "ha-904693" state is running.
	I1018 12:44:26.216371  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:26.239035  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:26.240781  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:44:26.240964  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:26.264332  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:26.264643  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:26.264652  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:44:26.265571  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:44:29.415325  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693
	
	I1018 12:44:29.415348  892123 ubuntu.go:182] provisioning hostname "ha-904693"
	I1018 12:44:29.415411  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:29.433529  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:29.433861  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:29.433879  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693 && echo "ha-904693" | sudo tee /etc/hostname
	I1018 12:44:29.588755  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693
	
	I1018 12:44:29.588848  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:29.609700  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:29.610004  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:29.610025  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:44:29.760098  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:44:29.760127  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:44:29.760148  892123 ubuntu.go:190] setting up certificates
	I1018 12:44:29.760157  892123 provision.go:84] configureAuth start
	I1018 12:44:29.760217  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:29.777989  892123 provision.go:143] copyHostCerts
	I1018 12:44:29.778029  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:29.778061  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:44:29.778077  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:29.778149  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:44:29.778226  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:29.778242  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:44:29.778247  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:29.778271  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:44:29.778308  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:29.778329  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:44:29.778333  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:29.778355  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:44:29.778399  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693 san=[127.0.0.1 192.168.49.2 ha-904693 localhost minikube]
	I1018 12:44:31.047109  892123 provision.go:177] copyRemoteCerts
	I1018 12:44:31.047193  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:44:31.047278  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.066067  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.172668  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:44:31.172743  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1018 12:44:31.191530  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:44:31.191692  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:44:31.211233  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:44:31.211300  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:44:31.230446  892123 provision.go:87] duration metric: took 1.47026349s to configureAuth
	I1018 12:44:31.230476  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:44:31.230724  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:31.230839  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.248755  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:31.249077  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:31.249098  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:44:31.576103  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:44:31.576129  892123 machine.go:96] duration metric: took 5.335328605s to provisionDockerMachine
	I1018 12:44:31.576140  892123 start.go:293] postStartSetup for "ha-904693" (driver="docker")
	I1018 12:44:31.576162  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:44:31.576224  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:44:31.576268  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.597908  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.707679  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:44:31.711002  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:44:31.711071  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:44:31.711090  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:44:31.711155  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:44:31.711247  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:44:31.711259  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:44:31.711355  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:44:31.718886  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:31.736340  892123 start.go:296] duration metric: took 160.184199ms for postStartSetup
	I1018 12:44:31.736438  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:44:31.736480  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.754046  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.853280  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:44:31.858215  892123 fix.go:56] duration metric: took 5.934701373s for fixHost
	I1018 12:44:31.858243  892123 start.go:83] releasing machines lock for "ha-904693", held for 5.934757012s
	I1018 12:44:31.858326  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:31.875758  892123 ssh_runner.go:195] Run: cat /version.json
	I1018 12:44:31.875830  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.875893  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:44:31.875954  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.896371  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.899369  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:32.089885  892123 ssh_runner.go:195] Run: systemctl --version
	I1018 12:44:32.096829  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:44:32.132460  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:44:32.136865  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:44:32.136993  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:44:32.144884  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:44:32.144907  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:44:32.144959  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:44:32.145021  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:44:32.160437  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:44:32.173683  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:44:32.173774  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:44:32.189773  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:44:32.203204  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:44:32.313641  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:44:32.432880  892123 docker.go:234] disabling docker service ...
	I1018 12:44:32.432958  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:44:32.449965  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:44:32.464069  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:44:32.584779  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:44:32.701524  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:44:32.716906  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:44:32.732220  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:44:32.732290  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.741629  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:44:32.741721  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.750956  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.760523  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.769646  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:44:32.777805  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.786814  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.795384  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.804860  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:44:32.812429  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:44:32.820169  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:32.933627  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
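(Annotation: taken together, the sed edits above set the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl in the CRI-O drop-in before the restart. A sketch of how that could be confirmed on the node, with the path and values taken from the commands in the log:)

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",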
	I1018 12:44:33.073156  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:44:33.073243  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:44:33.077339  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:44:33.077414  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:44:33.081817  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:44:33.111160  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:44:33.111248  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:44:33.140441  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:44:33.172376  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:44:33.175295  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:44:33.191834  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:44:33.195889  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
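(Annotation: the bash one-liner above is an idempotent rewrite of /etc/hosts: it strips any existing host.minikube.internal entry and appends the gateway mapping, so re-running it is harmless. Afterwards the node should show, as a sketch:)

	grep 'host.minikube.internal' /etc/hosts
	# 192.168.49.1	host.minikube.internal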
	I1018 12:44:33.206059  892123 kubeadm.go:883] updating cluster {Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:44:33.206251  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:33.206309  892123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:44:33.242225  892123 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:44:33.242255  892123 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:44:33.242314  892123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:44:33.268715  892123 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:44:33.268738  892123 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:44:33.268746  892123 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 12:44:33.268859  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
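(Annotation: the empty ExecStart= line in the drop-in above is intentional. In a systemd drop-in, the first, empty ExecStart= clears the ExecStart inherited from kubelet.service and the following line replaces it; without the clearing line, systemd would refuse two ExecStart entries for a non-oneshot service. The merged unit can be inspected with:)

	sudo systemctl cat kubelet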
	I1018 12:44:33.268940  892123 ssh_runner.go:195] Run: crio config
	I1018 12:44:33.339264  892123 cni.go:84] Creating CNI manager for ""
	I1018 12:44:33.339288  892123 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1018 12:44:33.339305  892123 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:44:33.339328  892123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-904693 NodeName:ha-904693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:44:33.339459  892123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-904693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
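(Annotation: the block above is the kubeadm configuration that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. Purely as a sketch, and not something the test run does, it could be sanity-checked with kubeadm's own validator, assuming the kubeadm binary sits under the versioned path referenced below:)

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new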
	
	I1018 12:44:33.339481  892123 kube-vip.go:115] generating kube-vip config ...
	I1018 12:44:33.339539  892123 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 12:44:33.352416  892123 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:44:33.352526  892123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
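(Annotation: control-plane load-balancing was skipped above because `lsmod | grep ip_vs` returned nothing, so the generated kube-vip manifest carries only the ARP-advertised VIP 192.168.49.254 without IPVS load-balancing. If the host kernel shipped the modules, loading them would look roughly like this; inside the docker driver's kernel they are often unavailable, which is why the code gives up instead:)

	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs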
	I1018 12:44:33.352590  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:44:33.360442  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:44:33.360534  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 12:44:33.368315  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 12:44:33.381459  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:44:33.394655  892123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 12:44:33.407827  892123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 12:44:33.421345  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:44:33.425393  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:44:33.435521  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:33.547456  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:44:33.571606  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.2
	I1018 12:44:33.571630  892123 certs.go:195] generating shared ca certs ...
	I1018 12:44:33.571647  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:33.571882  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:44:33.572004  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:44:33.572021  892123 certs.go:257] generating profile certs ...
	I1018 12:44:33.572109  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key
	I1018 12:44:33.572141  892123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44
	I1018 12:44:33.572159  892123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1018 12:44:34.089841  892123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 ...
	I1018 12:44:34.089879  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44: {Name:mk73ee01371c8601ccdf153e68cf18fb41b0caf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:34.090092  892123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44 ...
	I1018 12:44:34.090109  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44: {Name:mkc407effae516c519c94bd817f4f88bdad85974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:34.090201  892123 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt
	I1018 12:44:34.090356  892123 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key
	I1018 12:44:34.090505  892123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key
	I1018 12:44:34.090525  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:44:34.090542  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:44:34.090563  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:44:34.090582  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:44:34.090598  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 12:44:34.090617  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 12:44:34.090634  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 12:44:34.090652  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 12:44:34.090706  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:44:34.090745  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:44:34.090766  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:44:34.090802  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:44:34.090831  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:44:34.090865  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:44:34.090911  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:34.090942  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.090959  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.090975  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.091691  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:44:34.111143  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:44:34.130224  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:44:34.147895  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:44:34.166568  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:44:34.191542  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:44:34.218375  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:44:34.243094  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:44:34.264702  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:44:34.290199  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:44:34.313998  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:44:34.341991  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:44:34.361379  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:44:34.380056  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:44:34.400140  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.409637  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.409718  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.514177  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:44:34.526963  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:44:34.541968  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.546450  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.546529  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.608344  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:44:34.616770  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:44:34.627781  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.635676  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.635755  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.691087  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
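(Annotation: the <hash>.0 symlinks created above follow OpenSSL's subject-hash lookup scheme for /etc/ssl/certs: the link name is the value printed by `openssl x509 -hash` for the corresponding CA. For the minikubeCA copied in this run, the pairing can be checked with:)

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941   <- must match the symlink created above
	ls -l /etc/ssl/certs/b5213941.0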
	I1018 12:44:34.700436  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:44:34.704339  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:44:34.762289  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:44:34.835373  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:44:34.908492  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:44:34.968701  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:44:35.018893  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
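(Annotation: the -checkend 86400 calls above ask whether each certificate will still be valid 86400 seconds, i.e. 24 hours, from now: exit status 0 means yes, 1 means it expires within the window, presumably the basis for deciding whether the restart path needs to regenerate anything. A standalone example of the same check:)

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid in 24h" || echo "expires within 24h"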
	I1018 12:44:35.074866  892123 kubeadm.go:400] StartCluster: {Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:35.075012  892123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:44:35.075100  892123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:44:35.116413  892123 cri.go:89] found id: "f3e12646a28acaf33acb91c449640e2b7c2e1b51a07fda1222a124108fa3a60d"
	I1018 12:44:35.116441  892123 cri.go:89] found id: "adda974732675bf5434d1d2f50dcf1a62d7e89e192480dcbb5a9ffec2ab87ea9"
	I1018 12:44:35.116447  892123 cri.go:89] found id: "10798af55ae16ce657fb223cc3b8e580322135ff7246e162207a86ef8e91e5de"
	I1018 12:44:35.116470  892123 cri.go:89] found id: "2df8ceef3f1125567cb2b22627f6c2b90e7425331ffa5e5bbe8a97dcb849d5af"
	I1018 12:44:35.116474  892123 cri.go:89] found id: "bb134bdda02b2b1865dbf7bfd965c0d86f8c2b7ee0818669fb4f4cfd3f5f8484"
	I1018 12:44:35.116478  892123 cri.go:89] found id: ""
	I1018 12:44:35.116537  892123 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:44:35.135127  892123 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:44:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:44:35.135230  892123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:44:35.147730  892123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:44:35.147766  892123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:44:35.147824  892123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:44:35.157524  892123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:44:35.158025  892123 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-904693" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:35.158160  892123 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "ha-904693" cluster setting kubeconfig missing "ha-904693" context setting]
	I1018 12:44:35.158473  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.159101  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:44:35.159857  892123 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 12:44:35.159896  892123 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 12:44:35.159940  892123 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 12:44:35.159949  892123 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 12:44:35.159955  892123 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 12:44:35.159960  892123 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 12:44:35.160422  892123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:44:35.173010  892123 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 12:44:35.173040  892123 kubeadm.go:601] duration metric: took 25.265992ms to restartPrimaryControlPlane
	I1018 12:44:35.173050  892123 kubeadm.go:402] duration metric: took 98.194754ms to StartCluster
	I1018 12:44:35.173077  892123 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.173159  892123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:35.173840  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.174085  892123 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:44:35.174116  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:44:35.174143  892123 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:44:35.174720  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:35.180626  892123 out.go:179] * Enabled addons: 
	I1018 12:44:35.183765  892123 addons.go:514] duration metric: took 9.629337ms for enable addons: enabled=[]
	I1018 12:44:35.183834  892123 start.go:246] waiting for cluster config update ...
	I1018 12:44:35.183849  892123 start.go:255] writing updated cluster config ...
	I1018 12:44:35.186931  892123 out.go:203] 
	I1018 12:44:35.190015  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:35.190154  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.193614  892123 out.go:179] * Starting "ha-904693-m02" control-plane node in "ha-904693" cluster
	I1018 12:44:35.196414  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:44:35.199358  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:44:35.202336  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:35.202376  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:44:35.202494  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:44:35.202510  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:44:35.202646  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.202901  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:44:35.244427  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:44:35.244451  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:44:35.244465  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:44:35.244491  892123 start.go:360] acquireMachinesLock for ha-904693-m02: {Name:mk6c2f485a3713f332b20d1d9fdf103954df7ac5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:44:35.244553  892123 start.go:364] duration metric: took 42.085µs to acquireMachinesLock for "ha-904693-m02"
	I1018 12:44:35.244578  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:44:35.244587  892123 fix.go:54] fixHost starting: m02
	I1018 12:44:35.244844  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:35.277624  892123 fix.go:112] recreateIfNeeded on ha-904693-m02: state=Stopped err=<nil>
	W1018 12:44:35.277652  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:44:35.280995  892123 out.go:252] * Restarting existing docker container for "ha-904693-m02" ...
	I1018 12:44:35.281088  892123 cli_runner.go:164] Run: docker start ha-904693-m02
	I1018 12:44:35.680444  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:35.711547  892123 kic.go:430] container "ha-904693-m02" state is running.
	I1018 12:44:35.711981  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:35.739312  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.739556  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:44:35.739755  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:35.771422  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:35.771751  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:35.771766  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:44:35.772400  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:44:39.052293  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m02
	
	I1018 12:44:39.052316  892123 ubuntu.go:182] provisioning hostname "ha-904693-m02"
	I1018 12:44:39.052382  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:39.080876  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:39.081188  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:39.081199  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693-m02 && echo "ha-904693-m02" | sudo tee /etc/hostname
	I1018 12:44:39.340056  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m02
	
	I1018 12:44:39.340143  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:39.373338  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:39.373649  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:39.373672  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:44:39.630504  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:44:39.630578  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:44:39.630612  892123 ubuntu.go:190] setting up certificates
	I1018 12:44:39.630652  892123 provision.go:84] configureAuth start
	I1018 12:44:39.630734  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:39.675738  892123 provision.go:143] copyHostCerts
	I1018 12:44:39.675784  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:39.675817  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:44:39.675825  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:39.675904  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:44:39.675996  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:39.676014  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:44:39.676020  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:39.676047  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:44:39.676086  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:39.676101  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:44:39.676105  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:39.676126  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:44:39.676170  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693-m02 san=[127.0.0.1 192.168.49.3 ha-904693-m02 localhost minikube]
	I1018 12:44:40.218129  892123 provision.go:177] copyRemoteCerts
	I1018 12:44:40.218244  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:44:40.218322  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:40.236440  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:40.357787  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:44:40.357851  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:44:40.393588  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:44:40.393654  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:44:40.414582  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:44:40.414689  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:44:40.435522  892123 provision.go:87] duration metric: took 804.840193ms to configureAuth
	I1018 12:44:40.435591  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:44:40.435862  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:40.436016  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:40.461848  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:40.462155  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:40.462170  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:44:41.604038  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:44:41.604122  892123 machine.go:96] duration metric: took 5.864556191s to provisionDockerMachine
	I1018 12:44:41.604150  892123 start.go:293] postStartSetup for "ha-904693-m02" (driver="docker")
	I1018 12:44:41.604193  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:44:41.604277  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:44:41.604362  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:41.635166  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:41.769733  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:44:41.773730  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:44:41.773761  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:44:41.773774  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:44:41.773829  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:44:41.773913  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:44:41.773925  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:44:41.774028  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:44:41.784876  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:41.825486  892123 start.go:296] duration metric: took 221.293722ms for postStartSetup
	I1018 12:44:41.825575  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:44:41.825622  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:41.853550  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:41.984344  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:44:41.992594  892123 fix.go:56] duration metric: took 6.7479992s for fixHost
	I1018 12:44:41.992625  892123 start.go:83] releasing machines lock for "ha-904693-m02", held for 6.748059204s
	I1018 12:44:41.992720  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:42.035079  892123 out.go:179] * Found network options:
	I1018 12:44:42.038018  892123 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 12:44:42.041005  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:44:42.041052  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 12:44:42.041143  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:44:42.041192  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:42.041445  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:44:42.041506  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:42.075479  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:42.085476  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:42.517801  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:44:42.530700  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:44:42.530775  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:44:42.589914  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:44:42.589943  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:44:42.589978  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:44:42.590036  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:44:42.638987  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:44:42.723590  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:44:42.723700  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:44:42.768190  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:44:42.816075  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:44:43.152357  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:44:43.513952  892123 docker.go:234] disabling docker service ...
	I1018 12:44:43.514041  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:44:43.540222  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:44:43.562890  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:44:43.881442  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:44:44.114079  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:44:44.148782  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:44:44.181271  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:44:44.181354  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.192614  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:44:44.192694  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.213293  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.227635  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.246173  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:44:44.260324  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.277559  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.289335  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.301185  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:44:44.310422  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:44:44.319878  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:44.623936  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:46:14.836486  892123 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.212505487s)
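The sed invocations above set the pause image and cgroup driver in CRI-O's drop-in before the (slow) crio restart. A rough Go sketch of the same in-place key rewrite, assuming the drop-in path /etc/crio/crio.conf.d/02-crio.conf shown in the log; the helper is illustrative, not minikube's actual code:

// Minimal sketch of the in-place config rewrite performed by the sed commands above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setOption replaces every line of the form `key = ...` with `key = "value"`.
func setOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	// Same two rewrites as in the log: pause image and cgroup driver.
	if err := setOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	if err := setOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}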
	I1018 12:46:14.836513  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:46:14.836567  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:46:14.840408  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:46:14.840481  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:46:14.844075  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:46:14.874919  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:46:14.875007  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:14.904606  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:14.937907  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:46:14.940843  892123 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 12:46:14.943768  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:46:14.960925  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:46:14.964939  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
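The bash one-liner above makes the host.minikube.internal mapping idempotent: any stale entry is filtered out of /etc/hosts before the fresh line is appended. A stdlib-only Go sketch of the same rewrite (IP and hostname taken from the log, everything else illustrative):

// Sketch of an idempotent /etc/hosts update: drop any existing entry for the
// hostname, then append the current mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}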
	I1018 12:46:14.975051  892123 mustload.go:65] Loading cluster: ha-904693
	I1018 12:46:14.975310  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:14.975576  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:46:14.993112  892123 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:46:14.993392  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.3
	I1018 12:46:14.993406  892123 certs.go:195] generating shared ca certs ...
	I1018 12:46:14.993423  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:46:14.993545  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:46:14.993591  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:46:14.993605  892123 certs.go:257] generating profile certs ...
	I1018 12:46:14.993681  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key
	I1018 12:46:14.993743  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.385e3bc8
	I1018 12:46:14.993827  892123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key
	I1018 12:46:14.993839  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:46:14.993853  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:46:14.993868  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:46:14.993881  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:46:14.993896  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 12:46:14.993915  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 12:46:14.993927  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 12:46:14.993940  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 12:46:14.993992  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:46:14.994023  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:46:14.994036  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:46:14.994064  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:46:14.994090  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:46:14.994114  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:46:14.994159  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:14.994187  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:14.994202  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:46:14.994213  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:46:14.994275  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:46:15.025861  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:46:15.144065  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 12:46:15.148291  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 12:46:15.157425  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 12:46:15.161586  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 12:46:15.170498  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 12:46:15.175977  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 12:46:15.189359  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 12:46:15.193340  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1018 12:46:15.202262  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 12:46:15.206095  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 12:46:15.214849  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 12:46:15.219115  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 12:46:15.228620  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:46:15.247537  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:46:15.267038  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:46:15.296556  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:46:15.317916  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:46:15.336289  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:46:15.353950  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:46:15.373731  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:46:15.394136  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:46:15.413750  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:46:15.434057  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:46:15.453144  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 12:46:15.471392  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 12:46:15.487802  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 12:46:15.504613  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1018 12:46:15.518898  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 12:46:15.533487  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 12:46:15.549167  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 12:46:15.564048  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:46:15.570605  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:46:15.580039  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.584075  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.584195  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.625980  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:46:15.634627  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:46:15.643508  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.647557  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.647647  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.691919  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 12:46:15.702734  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:46:15.718411  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.727743  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.727823  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.778694  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:46:15.788950  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:46:15.793324  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:46:15.837931  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:46:15.890538  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:46:15.937757  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:46:15.981996  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:46:16.024029  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
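Each "openssl x509 ... -checkend 86400" run above asks whether the certificate will still be valid 24 hours from now. An equivalent check in Go using crypto/x509 (the path is one of the certs from the log; this is a sketch, not minikube's implementation):

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within duration d, mirroring `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}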
	I1018 12:46:16.066839  892123 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 12:46:16.067008  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:46:16.067038  892123 kube-vip.go:115] generating kube-vip config ...
	I1018 12:46:16.067094  892123 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 12:46:16.080115  892123 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:46:16.080187  892123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
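The kube-vip manifest above omits control-plane load-balancing because the earlier "lsmod | grep ip_vs" probe exited non-zero. A rough Go sketch of that availability check, reading /proc/modules instead of shelling out (an assumption for illustration, not minikube's actual code):

// ipvsLoaded checks whether the ip_vs kernel module is currently loaded.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ipvsLoaded() bool {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true
		}
	}
	return false
}

func main() {
	if ipvsLoaded() {
		fmt.Println("ip_vs available: control-plane load-balancing can be enabled")
	} else {
		fmt.Println("ip_vs not loaded: fall back to the ARP-based VIP only")
	}
}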
	I1018 12:46:16.080261  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:46:16.089171  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:46:16.089252  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 12:46:16.097956  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:46:16.111585  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:46:16.125002  892123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 12:46:16.140735  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:46:16.144498  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:16.154452  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:16.294558  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:16.309039  892123 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:46:16.309487  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:16.314390  892123 out.go:179] * Verifying Kubernetes components...
	I1018 12:46:16.317527  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:16.453319  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:16.468140  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 12:46:16.468216  892123 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 12:46:16.468510  892123 node_ready.go:35] waiting up to 6m0s for node "ha-904693-m02" to be "Ready" ...
	I1018 12:46:18.198175  892123 node_ready.go:49] node "ha-904693-m02" is "Ready"
	I1018 12:46:18.198201  892123 node_ready.go:38] duration metric: took 1.729664998s for node "ha-904693-m02" to be "Ready" ...
	I1018 12:46:18.198217  892123 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:46:18.198278  892123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:46:18.217101  892123 api_server.go:72] duration metric: took 1.908011588s to wait for apiserver process to appear ...
	I1018 12:46:18.217124  892123 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:46:18.217144  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:18.251260  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:18.251333  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
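The 500 responses above and below all come from the same ~500ms polling loop: the apiserver stays unhealthy until the rbac/bootstrap-roles post-start hook completes. A minimal polling sketch against the endpoint shown in the log (the real client authenticates with the cluster CA and client certificates, which this illustration skips via InsecureSkipVerify):

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}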
	I1018 12:46:18.717735  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:18.729578  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:18.729649  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:19.217875  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:19.234644  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:19.234731  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:19.717308  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:19.729198  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:19.729276  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:20.217475  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:20.226275  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:20.226367  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:20.718079  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:20.726851  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:20.727067  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:21.217664  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:21.226730  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:21.226816  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:21.717402  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:21.728568  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:21.728640  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:22.217240  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:22.225394  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:22.225426  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:22.717613  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:22.726996  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:22.727026  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:23.217597  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:23.225993  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:23.226022  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:23.717452  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:23.725986  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:23.726020  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:24.217619  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:24.225855  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:24.225886  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:24.717271  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:24.726978  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:24.727011  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:25.217464  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:25.225978  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:25.226004  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:25.717529  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:25.731613  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:25.731677  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:26.218064  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:26.226417  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:26.226450  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:26.718040  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:26.726172  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:26.726250  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:27.217881  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:27.226010  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:27.226046  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:27.717254  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:27.725448  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:27.725489  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:28.218129  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:28.226589  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:28.226622  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:28.717746  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:28.726371  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:28.726417  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:29.217874  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:29.227348  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:29.227383  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:29.717795  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:29.726023  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:29.726062  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:30.217207  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:30.225947  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:30.225992  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:30.717357  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:30.726514  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:30.726562  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:31.218170  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:31.226772  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:31.226808  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:31.717389  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:31.725579  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:31.725615  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:32.217261  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:32.225609  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:32.225686  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:32.717295  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:32.725527  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:32.725556  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:33.218209  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:33.226454  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:33.226485  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:33.718051  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:33.726332  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:33.726367  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:34.217582  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:34.230124  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:34.230163  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:34.717418  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:34.725438  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:34.725472  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:35.218121  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:35.228207  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:35.228243  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:35.717991  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:35.726425  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:35.726455  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:36.217618  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:36.226126  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:36.226154  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:36.717772  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:36.726079  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:36.726111  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:37.217227  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:37.228703  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:37.228733  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:37.717268  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:37.725340  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:37.725369  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:38.217518  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:38.225890  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:38.225933  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:38.718202  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:38.726360  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:38.726663  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:39.217201  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:39.225234  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:39.225266  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:39.717823  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:39.726660  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:39.726690  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:40.217283  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:40.226559  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:40.226603  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:40.717962  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:40.744008  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:40.744037  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:41.217607  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:41.225920  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:41.225964  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:41.717267  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:41.725273  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:41.725300  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:42.217469  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:42.226383  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:42.226419  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:42.718060  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:42.726681  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:42.726711  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:43.217278  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:43.225508  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:43.225544  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:43.718222  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:43.728152  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:43.728184  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:44.217541  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:44.225638  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:44.225666  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:44.717265  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:44.725307  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:44.725339  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:45.220300  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:45.238786  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:45.238819  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:45.717206  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:45.726748  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:45.726780  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:46.217362  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:46.225787  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:46.225815  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:46.718214  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:46.727280  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:46.727306  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:47.217946  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:47.226669  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:46:47.227992  892123 api_server.go:141] control plane version: v1.34.1
	I1018 12:46:47.228017  892123 api_server.go:131] duration metric: took 29.010884789s to wait for apiserver health ...
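The lines above show api_server.go polling https://192.168.49.2:8443/healthz roughly every 500ms, logging the per-check body on each 500, and stopping once the endpoint returns 200. A minimal Go sketch of that loop follows; the helper name waitForHealthz and the skip-verify transport are illustrative only (minikube builds its client from the profile's certificates), and the URL is the one from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification for brevity; the real code trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "healthz returned 200: ok"
			}
			// On 500 the body lists each check, e.g. "[-]poststarthook/rbac/bootstrap-roles failed".
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute))
}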
	I1018 12:46:47.228027  892123 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:46:47.235895  892123 system_pods.go:59] 26 kube-system pods found
	I1018 12:46:47.235980  892123 system_pods.go:61] "coredns-66bc5c9577-np459" [33cb0fc6-b8df-4149-85da-e6417a6de391] Running
	I1018 12:46:47.236002  892123 system_pods.go:61] "coredns-66bc5c9577-w4mzd" [76a15b28-7a49-47e3-baf1-12c18b680ade] Running
	I1018 12:46:47.236024  892123 system_pods.go:61] "etcd-ha-904693" [6a65bc4e-41f8-48fd-a64a-c1920f35caf4] Running
	I1018 12:46:47.236074  892123 system_pods.go:61] "etcd-ha-904693-m02" [94a516fe-dcfe-4e93-baa3-fb16142884cc] Running
	I1018 12:46:47.236094  892123 system_pods.go:61] "etcd-ha-904693-m03" [f2d9e3be-8b60-4549-a41d-d8bdab528ea7] Running
	I1018 12:46:47.236117  892123 system_pods.go:61] "kindnet-j75n6" [b30c1029-3217-42b0-87d1-f96b2bf02858] Running
	I1018 12:46:47.236155  892123 system_pods.go:61] "kindnet-lwbfx" [2053e657-7951-4224-aac4-980e101bc352] Running
	I1018 12:46:47.236181  892123 system_pods.go:61] "kindnet-nqql7" [061fc15c-de36-4123-8bb7-ac3d65a44ba4] Running
	I1018 12:46:47.236201  892123 system_pods.go:61] "kindnet-z2jqf" [adbd3882-090c-44e7-96c0-8374c4c8761e] Running
	I1018 12:46:47.236241  892123 system_pods.go:61] "kube-apiserver-ha-904693" [21472a04-9583-4452-949b-6d0d5c44ca4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:46:47.236265  892123 system_pods.go:61] "kube-apiserver-ha-904693-m02" [095e1af5-5aea-4dad-aa89-09611005c26b] Running
	I1018 12:46:47.236284  892123 system_pods.go:61] "kube-apiserver-ha-904693-m03" [ac2fa248-fb39-471a-953b-5caff0045c23] Running
	I1018 12:46:47.236324  892123 system_pods.go:61] "kube-controller-manager-ha-904693" [e46c064c-8863-43f6-8049-bc7f6b5fd6e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:46:47.236350  892123 system_pods.go:61] "kube-controller-manager-ha-904693-m02" [01ced66c-fe9a-49cb-96f9-f117382aaa39] Running
	I1018 12:46:47.236373  892123 system_pods.go:61] "kube-controller-manager-ha-904693-m03" [b2a37b0a-af53-4e5f-b048-630fa65a4562] Running
	I1018 12:46:47.236410  892123 system_pods.go:61] "kube-proxy-25w58" [8120ec45-9954-42fc-ba8c-1784f050d7c7] Running
	I1018 12:46:47.236436  892123 system_pods.go:61] "kube-proxy-bckwd" [3ef760c9-0925-40c4-a43d-3dc1bc11a4f3] Running
	I1018 12:46:47.236457  892123 system_pods.go:61] "kube-proxy-s8rqn" [1b0abab1-7503-4dbb-874d-3a89837e39b8] Running
	I1018 12:46:47.236497  892123 system_pods.go:61] "kube-proxy-xvnxv" [1babac5c-cb8e-4b88-8a73-387df9d8b652] Running
	I1018 12:46:47.236526  892123 system_pods.go:61] "kube-scheduler-ha-904693" [a40b4487-da19-47c0-a990-d459235cd8f0] Running
	I1018 12:46:47.236548  892123 system_pods.go:61] "kube-scheduler-ha-904693-m02" [32877fa9-7d21-4d37-9c42-855b6fd4c11f] Running
	I1018 12:46:47.236581  892123 system_pods.go:61] "kube-scheduler-ha-904693-m03" [fbe42864-50a4-4b9f-bee1-96f3e3db090d] Running
	I1018 12:46:47.236605  892123 system_pods.go:61] "kube-vip-ha-904693" [04fca9f1-a6fd-45a0-abb1-1b9226e1f8f4] Running
	I1018 12:46:47.236627  892123 system_pods.go:61] "kube-vip-ha-904693-m02" [2563b6ff-3a9b-487b-a469-d3a58046306b] Running
	I1018 12:46:47.236663  892123 system_pods.go:61] "kube-vip-ha-904693-m03" [67639c6c-f2c1-4066-999a-b1edb1875d5d] Running
	I1018 12:46:47.236688  892123 system_pods.go:61] "storage-provisioner" [d490933f-6cca-41d5-a5d3-d128def7ed62] Running
	I1018 12:46:47.236711  892123 system_pods.go:74] duration metric: took 8.677343ms to wait for pod list to return data ...
	I1018 12:46:47.236747  892123 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:46:47.240740  892123 default_sa.go:45] found service account: "default"
	I1018 12:46:47.240819  892123 default_sa.go:55] duration metric: took 4.047411ms for default service account to be created ...
	I1018 12:46:47.240844  892123 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:46:47.252062  892123 system_pods.go:86] 26 kube-system pods found
	I1018 12:46:47.252100  892123 system_pods.go:89] "coredns-66bc5c9577-np459" [33cb0fc6-b8df-4149-85da-e6417a6de391] Running
	I1018 12:46:47.252109  892123 system_pods.go:89] "coredns-66bc5c9577-w4mzd" [76a15b28-7a49-47e3-baf1-12c18b680ade] Running
	I1018 12:46:47.252113  892123 system_pods.go:89] "etcd-ha-904693" [6a65bc4e-41f8-48fd-a64a-c1920f35caf4] Running
	I1018 12:46:47.252143  892123 system_pods.go:89] "etcd-ha-904693-m02" [94a516fe-dcfe-4e93-baa3-fb16142884cc] Running
	I1018 12:46:47.252155  892123 system_pods.go:89] "etcd-ha-904693-m03" [f2d9e3be-8b60-4549-a41d-d8bdab528ea7] Running
	I1018 12:46:47.252160  892123 system_pods.go:89] "kindnet-j75n6" [b30c1029-3217-42b0-87d1-f96b2bf02858] Running
	I1018 12:46:47.252164  892123 system_pods.go:89] "kindnet-lwbfx" [2053e657-7951-4224-aac4-980e101bc352] Running
	I1018 12:46:47.252174  892123 system_pods.go:89] "kindnet-nqql7" [061fc15c-de36-4123-8bb7-ac3d65a44ba4] Running
	I1018 12:46:47.252178  892123 system_pods.go:89] "kindnet-z2jqf" [adbd3882-090c-44e7-96c0-8374c4c8761e] Running
	I1018 12:46:47.252186  892123 system_pods.go:89] "kube-apiserver-ha-904693" [21472a04-9583-4452-949b-6d0d5c44ca4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:46:47.252198  892123 system_pods.go:89] "kube-apiserver-ha-904693-m02" [095e1af5-5aea-4dad-aa89-09611005c26b] Running
	I1018 12:46:47.252219  892123 system_pods.go:89] "kube-apiserver-ha-904693-m03" [ac2fa248-fb39-471a-953b-5caff0045c23] Running
	I1018 12:46:47.252234  892123 system_pods.go:89] "kube-controller-manager-ha-904693" [e46c064c-8863-43f6-8049-bc7f6b5fd6e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:46:47.252239  892123 system_pods.go:89] "kube-controller-manager-ha-904693-m02" [01ced66c-fe9a-49cb-96f9-f117382aaa39] Running
	I1018 12:46:47.252247  892123 system_pods.go:89] "kube-controller-manager-ha-904693-m03" [b2a37b0a-af53-4e5f-b048-630fa65a4562] Running
	I1018 12:46:47.252252  892123 system_pods.go:89] "kube-proxy-25w58" [8120ec45-9954-42fc-ba8c-1784f050d7c7] Running
	I1018 12:46:47.252256  892123 system_pods.go:89] "kube-proxy-bckwd" [3ef760c9-0925-40c4-a43d-3dc1bc11a4f3] Running
	I1018 12:46:47.252260  892123 system_pods.go:89] "kube-proxy-s8rqn" [1b0abab1-7503-4dbb-874d-3a89837e39b8] Running
	I1018 12:46:47.252264  892123 system_pods.go:89] "kube-proxy-xvnxv" [1babac5c-cb8e-4b88-8a73-387df9d8b652] Running
	I1018 12:46:47.252277  892123 system_pods.go:89] "kube-scheduler-ha-904693" [a40b4487-da19-47c0-a990-d459235cd8f0] Running
	I1018 12:46:47.252294  892123 system_pods.go:89] "kube-scheduler-ha-904693-m02" [32877fa9-7d21-4d37-9c42-855b6fd4c11f] Running
	I1018 12:46:47.252308  892123 system_pods.go:89] "kube-scheduler-ha-904693-m03" [fbe42864-50a4-4b9f-bee1-96f3e3db090d] Running
	I1018 12:46:47.252312  892123 system_pods.go:89] "kube-vip-ha-904693" [04fca9f1-a6fd-45a0-abb1-1b9226e1f8f4] Running
	I1018 12:46:47.252318  892123 system_pods.go:89] "kube-vip-ha-904693-m02" [2563b6ff-3a9b-487b-a469-d3a58046306b] Running
	I1018 12:46:47.252336  892123 system_pods.go:89] "kube-vip-ha-904693-m03" [67639c6c-f2c1-4066-999a-b1edb1875d5d] Running
	I1018 12:46:47.252342  892123 system_pods.go:89] "storage-provisioner" [d490933f-6cca-41d5-a5d3-d128def7ed62] Running
	I1018 12:46:47.252357  892123 system_pods.go:126] duration metric: took 11.472811ms to wait for k8s-apps to be running ...
	I1018 12:46:47.252376  892123 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:46:47.252446  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:46:47.269517  892123 system_svc.go:56] duration metric: took 17.132227ms WaitForService to wait for kubelet
	I1018 12:46:47.269546  892123 kubeadm.go:586] duration metric: took 30.960462504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:46:47.269566  892123 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:46:47.274201  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274235  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274248  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274253  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274257  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274296  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274304  892123 node_conditions.go:105] duration metric: took 4.713888ms to run NodePressure ...
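The NodePressure step above reads each node's ephemeral-storage and CPU capacity. As a rough illustration only (not minikube's node_conditions.go), the same values can be read with client-go like this; the kubeconfig path and function name are assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func printNodeCapacity(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}

func main() {
	if err := printNodeCapacity(clientcmd.RecommendedHomeFile); err != nil {
		panic(err)
	}
}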
	I1018 12:46:47.274322  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:46:47.274358  892123 start.go:255] writing updated cluster config ...
	I1018 12:46:47.277881  892123 out.go:203] 
	I1018 12:46:47.280982  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:47.281113  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.284552  892123 out.go:179] * Starting "ha-904693-m04" worker node in "ha-904693" cluster
	I1018 12:46:47.288329  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:46:47.290468  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:46:47.293413  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:46:47.293456  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:46:47.293503  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:46:47.293595  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:46:47.293607  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:46:47.293757  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.314739  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:46:47.314762  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:46:47.314780  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:46:47.314805  892123 start.go:360] acquireMachinesLock for ha-904693-m04: {Name:mk97ed96515b1272cbdea992e117b8911f5b1654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:46:47.314870  892123 start.go:364] duration metric: took 45.481µs to acquireMachinesLock for "ha-904693-m04"
	I1018 12:46:47.314893  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:46:47.314902  892123 fix.go:54] fixHost starting: m04
	I1018 12:46:47.315155  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:46:47.332443  892123 fix.go:112] recreateIfNeeded on ha-904693-m04: state=Stopped err=<nil>
	W1018 12:46:47.332521  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:46:47.335757  892123 out.go:252] * Restarting existing docker container for "ha-904693-m04" ...
	I1018 12:46:47.335864  892123 cli_runner.go:164] Run: docker start ha-904693-m04
	I1018 12:46:47.662072  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:46:47.692999  892123 kic.go:430] container "ha-904693-m04" state is running.
	I1018 12:46:47.693365  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:47.716277  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.716634  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:46:47.716712  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:47.737549  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:47.737866  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:47.737883  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:46:47.738856  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:46:50.891423  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m04
	
	I1018 12:46:50.891500  892123 ubuntu.go:182] provisioning hostname "ha-904693-m04"
	I1018 12:46:50.891579  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:50.911143  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:50.911556  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:50.911590  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693-m04 && echo "ha-904693-m04" | sudo tee /etc/hostname
	I1018 12:46:51.083384  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m04
	
	I1018 12:46:51.083546  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.103177  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:51.103480  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:51.103496  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:46:51.264024  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:46:51.264123  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:46:51.264148  892123 ubuntu.go:190] setting up certificates
	I1018 12:46:51.264172  892123 provision.go:84] configureAuth start
	I1018 12:46:51.264250  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:51.283401  892123 provision.go:143] copyHostCerts
	I1018 12:46:51.283446  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:46:51.283481  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:46:51.283494  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:46:51.283573  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:46:51.283688  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:46:51.283714  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:46:51.283724  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:46:51.283763  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:46:51.283815  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:46:51.283836  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:46:51.283845  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:46:51.283870  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:46:51.283923  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693-m04 san=[127.0.0.1 192.168.49.5 ha-904693-m04 localhost minikube]
	I1018 12:46:51.487797  892123 provision.go:177] copyRemoteCerts
	I1018 12:46:51.487868  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:46:51.487911  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.510008  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:51.615718  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:46:51.615785  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:46:51.634401  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:46:51.634467  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:46:51.655136  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:46:51.655199  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:46:51.677312  892123 provision.go:87] duration metric: took 413.118272ms to configureAuth
	I1018 12:46:51.677338  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:46:51.677569  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:51.677678  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.695105  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:51.695420  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:51.695442  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:46:52.007291  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:46:52.007315  892123 machine.go:96] duration metric: took 4.290661536s to provisionDockerMachine
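provisionDockerMachine, as logged above, runs its provisioning commands (hostname, /etc/hosts, CRI-O options) over SSH to the forwarded port of the node container. A minimal sketch of one such remote command using golang.org/x/crypto/ssh follows; the address, key path and command are the ones that appear in the log, while the helper runRemote is purely illustrative.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local, port-forwarded container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("127.0.0.1:33947",
		"/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa",
		`sudo hostname ha-904693-m04 && echo "ha-904693-m04" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}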
	I1018 12:46:52.007328  892123 start.go:293] postStartSetup for "ha-904693-m04" (driver="docker")
	I1018 12:46:52.007341  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:46:52.007440  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:46:52.007488  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.034279  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.148189  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:46:52.151952  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:46:52.152034  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:46:52.152060  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:46:52.152123  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:46:52.152205  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:46:52.152217  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:46:52.152317  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:46:52.160224  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:52.185280  892123 start.go:296] duration metric: took 177.935801ms for postStartSetup
	I1018 12:46:52.185367  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:46:52.185409  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.204012  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.309958  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:46:52.318024  892123 fix.go:56] duration metric: took 5.003113681s for fixHost
	I1018 12:46:52.318051  892123 start.go:83] releasing machines lock for "ha-904693-m04", held for 5.003169468s
	I1018 12:46:52.318132  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:52.338543  892123 out.go:179] * Found network options:
	I1018 12:46:52.341584  892123 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1018 12:46:52.344371  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344399  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344423  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344438  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 12:46:52.344508  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:46:52.344554  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.344831  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:46:52.344903  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.372515  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.374225  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.579686  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:46:52.584329  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:46:52.584402  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:46:52.593417  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:46:52.593443  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:46:52.593476  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:46:52.593524  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:46:52.609004  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:46:52.623230  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:46:52.623318  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:46:52.639717  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:46:52.657699  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:46:52.794706  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:46:52.921750  892123 docker.go:234] disabling docker service ...
	I1018 12:46:52.921870  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:46:52.939978  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:46:52.957529  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:46:53.104620  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:46:53.235063  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:46:53.249044  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:46:53.264364  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:46:53.264444  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.277945  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:46:53.278028  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.288323  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.297677  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.306794  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:46:53.314879  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.325157  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.333994  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.343268  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:46:53.351341  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:46:53.359207  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:53.488389  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
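The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, sysctls) before crio is restarted. The following Go fragment is only an illustration of what one of those substitutions does; the path, pattern and image come from the log, the function name is made up for the sketch.

package main

import (
	"os"
	"regexp"
)

func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "<image>"|' <path>
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	_ = setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
}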
	I1018 12:46:53.631149  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:46:53.631269  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:46:53.635894  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:46:53.636001  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:46:53.640586  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:46:53.680864  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:46:53.680981  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:53.722237  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:53.757817  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:46:53.760732  892123 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 12:46:53.763576  892123 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 12:46:53.765748  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:46:53.783043  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:46:53.787170  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:53.797279  892123 mustload.go:65] Loading cluster: ha-904693
	I1018 12:46:53.797525  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:53.797787  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:46:53.816361  892123 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:46:53.816630  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.5
	I1018 12:46:53.816637  892123 certs.go:195] generating shared ca certs ...
	I1018 12:46:53.816653  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:46:53.816755  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:46:53.816795  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:46:53.816807  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:46:53.816820  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:46:53.816830  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:46:53.816843  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:46:53.816895  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:46:53.816925  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:46:53.816933  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:46:53.816956  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:46:53.816977  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:46:53.816997  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:46:53.817039  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:53.817065  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:53.817077  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.817087  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:46:53.817105  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:46:53.836940  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:46:53.857942  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:46:53.880441  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:46:53.899127  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:46:53.928293  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:46:53.948582  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:46:53.967019  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:46:53.973552  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:46:53.982588  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.986756  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.986822  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:46:54.033044  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 12:46:54.042429  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:46:54.051990  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.056823  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.056924  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.099082  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:46:54.107933  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:46:54.117094  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.121498  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.121603  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.164645  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
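The certificate steps above compute each CA file's OpenSSL subject hash and link /etc/ssl/certs/<hash>.0 to it (51391683.0, 3ec20f2e.0, b5213941.0). A small sketch of that step, shelling out to openssl exactly as the log does; the helper name is illustrative and it would need to run as root on the node.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"))
}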
	I1018 12:46:54.179721  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:46:54.183706  892123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:46:54.183754  892123 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1018 12:46:54.183838  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:46:54.183909  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:46:54.192639  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:46:54.192775  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1018 12:46:54.200819  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:46:54.215040  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:46:54.229836  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:46:54.234543  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:54.244928  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:54.376940  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:54.392818  892123 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1018 12:46:54.393235  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:54.396046  892123 out.go:179] * Verifying Kubernetes components...
	I1018 12:46:54.399111  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:54.530712  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:54.553448  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 12:46:54.553522  892123 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 12:46:54.553818  892123 node_ready.go:35] waiting up to 6m0s for node "ha-904693-m04" to be "Ready" ...
	I1018 12:46:54.557200  892123 node_ready.go:49] node "ha-904693-m04" is "Ready"
	I1018 12:46:54.557238  892123 node_ready.go:38] duration metric: took 3.399257ms for node "ha-904693-m04" to be "Ready" ...
	I1018 12:46:54.557252  892123 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:46:54.557309  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:46:54.571372  892123 system_svc.go:56] duration metric: took 14.111509ms WaitForService to wait for kubelet
	I1018 12:46:54.571412  892123 kubeadm.go:586] duration metric: took 178.543905ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:46:54.571434  892123 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:46:54.575184  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575215  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575227  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575232  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575236  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575242  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575247  892123 node_conditions.go:105] duration metric: took 3.806637ms to run NodePressure ...
	I1018 12:46:54.575260  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:46:54.575287  892123 start.go:255] writing updated cluster config ...
	I1018 12:46:54.575609  892123 ssh_runner.go:195] Run: rm -f paused
	I1018 12:46:54.579787  892123 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:46:54.580332  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:46:54.597506  892123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-np459" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.603509  892123 pod_ready.go:94] pod "coredns-66bc5c9577-np459" is "Ready"
	I1018 12:46:54.603539  892123 pod_ready.go:86] duration metric: took 6.000704ms for pod "coredns-66bc5c9577-np459" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.603550  892123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w4mzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.611441  892123 pod_ready.go:94] pod "coredns-66bc5c9577-w4mzd" is "Ready"
	I1018 12:46:54.611468  892123 pod_ready.go:86] duration metric: took 7.909713ms for pod "coredns-66bc5c9577-w4mzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.615301  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.622147  892123 pod_ready.go:94] pod "etcd-ha-904693" is "Ready"
	I1018 12:46:54.622188  892123 pod_ready.go:86] duration metric: took 6.858682ms for pod "etcd-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.622213  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.628869  892123 pod_ready.go:94] pod "etcd-ha-904693-m02" is "Ready"
	I1018 12:46:54.628906  892123 pod_ready.go:86] duration metric: took 6.68035ms for pod "etcd-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.628916  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.781287  892123 request.go:683] "Waited before sending request" delay="152.209169ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-904693-m03"
	I1018 12:46:54.981063  892123 request.go:683] "Waited before sending request" delay="194.309357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m03"
	I1018 12:46:54.984206  892123 pod_ready.go:99] pod "etcd-ha-904693-m03" in "kube-system" namespace is gone: node "ha-904693-m03" hosting pod "etcd-ha-904693-m03" is not found/running (skipping!): nodes "ha-904693-m03" not found
	I1018 12:46:54.984230  892123 pod_ready.go:86] duration metric: took 355.308487ms for pod "etcd-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:55.181697  892123 request.go:683] "Waited before sending request" delay="197.366801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1018 12:46:55.185514  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:55.380841  892123 request.go:683] "Waited before sending request" delay="195.16471ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693"
	I1018 12:46:55.581533  892123 request.go:683] "Waited before sending request" delay="196.391315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:55.781523  892123 request.go:683] "Waited before sending request" delay="95.293605ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693"
	I1018 12:46:55.981310  892123 request.go:683] "Waited before sending request" delay="196.367824ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.381644  892123 request.go:683] "Waited before sending request" delay="186.36368ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.781281  892123 request.go:683] "Waited before sending request" delay="92.241215ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.784454  892123 pod_ready.go:94] pod "kube-apiserver-ha-904693" is "Ready"
	I1018 12:46:56.784481  892123 pod_ready.go:86] duration metric: took 1.598894155s for pod "kube-apiserver-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:56.784491  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:56.980828  892123 request.go:683] "Waited before sending request" delay="196.248142ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693-m02"
	I1018 12:46:57.181477  892123 request.go:683] "Waited before sending request" delay="197.376581ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m02"
	I1018 12:46:57.184898  892123 pod_ready.go:94] pod "kube-apiserver-ha-904693-m02" is "Ready"
	I1018 12:46:57.184987  892123 pod_ready.go:86] duration metric: took 400.485818ms for pod "kube-apiserver-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.185012  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.381473  892123 request.go:683] "Waited before sending request" delay="196.32459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693-m03"
	I1018 12:46:57.581071  892123 request.go:683] "Waited before sending request" delay="196.144823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m03"
	I1018 12:46:57.583949  892123 pod_ready.go:99] pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace is gone: node "ha-904693-m03" hosting pod "kube-apiserver-ha-904693-m03" is not found/running (skipping!): nodes "ha-904693-m03" not found
	I1018 12:46:57.583972  892123 pod_ready.go:86] duration metric: took 398.952558ms for pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.781459  892123 request.go:683] "Waited before sending request" delay="197.326545ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1018 12:46:57.785500  892123 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.980788  892123 request.go:683] "Waited before sending request" delay="195.154281ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-904693"
	I1018 12:46:58.181517  892123 request.go:683] "Waited before sending request" delay="197.28876ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:58.381504  892123 request.go:683] "Waited before sending request" delay="95.288468ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-904693"
	I1018 12:46:58.580784  892123 request.go:683] "Waited before sending request" delay="194.281533ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:58.980851  892123 request.go:683] "Waited before sending request" delay="191.275019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:59.381533  892123 request.go:683] "Waited before sending request" delay="92.286237ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	W1018 12:46:59.792577  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:02.292675  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:04.293083  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:06.791662  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:08.795381  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:11.291608  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:13.291844  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:15.792067  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:18.291597  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:20.293497  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:22.793443  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:25.292520  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	I1018 12:47:26.791941  892123 pod_ready.go:94] pod "kube-controller-manager-ha-904693" is "Ready"
	I1018 12:47:26.791970  892123 pod_ready.go:86] duration metric: took 29.006442197s for pod "kube-controller-manager-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:47:26.791980  892123 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:47:28.799636  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:31.297899  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:33.298942  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:35.299122  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:37.799274  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:39.799373  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:42.301596  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:44.799207  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:47.299820  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:49.300296  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:51.798423  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:53.799278  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:56.298648  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:58.299303  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:00.306006  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:02.799215  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:04.802074  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:07.299319  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:09.799601  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:12.299633  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:14.799487  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:17.298286  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:19.298543  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:21.299532  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:23.799455  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:25.799781  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:28.299460  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:30.798185  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:32.799335  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:35.298104  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:37.299134  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:39.299272  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:41.299448  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:43.798462  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:45.799490  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:48.299004  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:50.299216  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:52.300129  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:54.301209  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:56.798691  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:59.299033  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:01.299417  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:03.798310  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:05.798466  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:08.298020  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:10.298851  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:12.299443  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:14.798426  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:17.299094  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:19.299178  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:21.798879  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:24.299310  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:26.798113  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:29.298413  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:31.799065  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:33.799271  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:35.803906  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:38.299064  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:40.299407  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:42.299972  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:44.798560  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:46.798758  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:48.799585  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:51.299544  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:53.300291  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:55.799555  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:58.298220  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:00.308856  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:02.799995  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:05.298036  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:07.300018  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:09.799328  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:12.298707  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:14.298758  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:16.798951  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:19.299158  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:21.799396  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:23.799509  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:26.298486  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:28.298553  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:30.298649  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:32.299193  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:34.800007  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:37.299243  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:39.799471  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:42.299390  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:44.798986  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:47.298083  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:49.300477  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:51.799774  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:54.298353  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	I1018 12:50:54.580674  892123 pod_ready.go:86] duration metric: took 3m27.788657319s for pod "kube-controller-manager-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:50:54.580708  892123 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1018 12:50:54.580723  892123 pod_ready.go:40] duration metric: took 4m0.000906152s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:50:54.583790  892123 out.go:203] 
	W1018 12:50:54.586624  892123 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1018 12:50:54.589451  892123 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-904693 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-904693
helpers_test.go:243: (dbg) docker inspect ha-904693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e",
	        "Created": "2025-10-18T12:36:31.14853988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 892248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:44:25.97701543Z",
	            "FinishedAt": "2025-10-18T12:44:25.288916989Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e-json.log",
	        "Name": "/ha-904693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-904693:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-904693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e",
	                "LowerDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-904693",
	                "Source": "/var/lib/docker/volumes/ha-904693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-904693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-904693",
	                "name.minikube.sigs.k8s.io": "ha-904693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c50d05dd02fc73a6e1bf9086ad2446bd076fd521984307bb39ab5a499f23326",
	            "SandboxKey": "/var/run/docker/netns/9c50d05dd02f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-904693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:d6:c0:3d:80:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee94edf185e561d017352654d9e728ff82b5f4b27507dd51d551497bab189810",
	                    "EndpointID": "255fc8c5c14856f51b7da7876d61e503ec6a3f85dd6b9147108386eebadf9c15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-904693",
	                        "9e9432db50a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-904693 -n ha-904693
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 logs -n 25: (1.677547752s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-904693 cp ha-904693-m03:/home/docker/cp-test.txt ha-904693-m04:/home/docker/cp-test_ha-904693-m03_ha-904693-m04.txt               │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test_ha-904693-m03_ha-904693-m04.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp testdata/cp-test.txt ha-904693-m04:/home/docker/cp-test.txt                                                             │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2476059903/001/cp-test_ha-904693-m04.txt │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693:/home/docker/cp-test_ha-904693-m04_ha-904693.txt                       │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693.txt                                                 │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693-m02:/home/docker/cp-test_ha-904693-m04_ha-904693-m02.txt               │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m02 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693-m02.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693-m03:/home/docker/cp-test_ha-904693-m04_ha-904693-m03.txt               │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m03 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693-m03.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ node    │ ha-904693 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:41 UTC │
	│ node    │ ha-904693 node start m02 --alsologtostderr -v 5                                                                                      │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:41 UTC │
	│ node    │ ha-904693 node list --alsologtostderr -v 5                                                                                           │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │                     │
	│ stop    │ ha-904693 stop --alsologtostderr -v 5                                                                                                │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:41 UTC │
	│ start   │ ha-904693 start --wait true --alsologtostderr -v 5                                                                                   │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:43 UTC │
	│ node    │ ha-904693 node list --alsologtostderr -v 5                                                                                           │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │                     │
	│ node    │ ha-904693 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │ 18 Oct 25 12:43 UTC │
	│ stop    │ ha-904693 stop --alsologtostderr -v 5                                                                                                │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │ 18 Oct 25 12:44 UTC │
	│ start   │ ha-904693 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:44:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:44:25.711916  892123 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:44:25.712088  892123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.712119  892123 out.go:374] Setting ErrFile to fd 2...
	I1018 12:44:25.712138  892123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.712423  892123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:44:25.712837  892123 out.go:368] Setting JSON to false
	I1018 12:44:25.713721  892123 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16018,"bootTime":1760775448,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:44:25.713821  892123 start.go:141] virtualization:  
	I1018 12:44:25.719185  892123 out.go:179] * [ha-904693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:44:25.722230  892123 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:44:25.722359  892123 notify.go:220] Checking for updates...
	I1018 12:44:25.728356  892123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:44:25.731393  892123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:25.734246  892123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:44:25.737415  892123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:44:25.740192  892123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:44:25.743783  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:25.744347  892123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:44:25.769253  892123 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:44:25.769378  892123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:44:25.830176  892123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:44:25.820847832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:44:25.830279  892123 docker.go:318] overlay module found
	I1018 12:44:25.833295  892123 out.go:179] * Using the docker driver based on existing profile
	I1018 12:44:25.836144  892123 start.go:305] selected driver: docker
	I1018 12:44:25.836180  892123 start.go:925] validating driver "docker" against &{Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:25.836325  892123 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:44:25.836440  892123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:44:25.891844  892123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:44:25.88247637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:44:25.892307  892123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:44:25.892333  892123 cni.go:84] Creating CNI manager for ""
	I1018 12:44:25.892393  892123 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1018 12:44:25.892444  892123 start.go:349] cluster config:
	{Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:25.895566  892123 out.go:179] * Starting "ha-904693" primary control-plane node in "ha-904693" cluster
	I1018 12:44:25.898242  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:44:25.901058  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:44:25.903961  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:44:25.904124  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:25.904158  892123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 12:44:25.904169  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:44:25.904245  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:44:25.904261  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:44:25.904405  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:25.923338  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:44:25.923361  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:44:25.923378  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:44:25.923408  892123 start.go:360] acquireMachinesLock for ha-904693: {Name:mk0b11e6cfae1fdc8dfba1eeb3a517fb42d395b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:44:25.923474  892123 start.go:364] duration metric: took 44.365µs to acquireMachinesLock for "ha-904693"
	I1018 12:44:25.923496  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:44:25.923506  892123 fix.go:54] fixHost starting: 
	I1018 12:44:25.923797  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:25.940565  892123 fix.go:112] recreateIfNeeded on ha-904693: state=Stopped err=<nil>
	W1018 12:44:25.940596  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:44:25.943864  892123 out.go:252] * Restarting existing docker container for "ha-904693" ...
	I1018 12:44:25.943958  892123 cli_runner.go:164] Run: docker start ha-904693
	I1018 12:44:26.194711  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:26.215813  892123 kic.go:430] container "ha-904693" state is running.
	I1018 12:44:26.216371  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:26.239035  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:26.240781  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:44:26.240964  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:26.264332  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:26.264643  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:26.264652  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:44:26.265571  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:44:29.415325  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693
	
	I1018 12:44:29.415348  892123 ubuntu.go:182] provisioning hostname "ha-904693"
	I1018 12:44:29.415411  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:29.433529  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:29.433861  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:29.433879  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693 && echo "ha-904693" | sudo tee /etc/hostname
	I1018 12:44:29.588755  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693
	
	I1018 12:44:29.588848  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:29.609700  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:29.610004  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:29.610025  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:44:29.760098  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:44:29.760127  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:44:29.760148  892123 ubuntu.go:190] setting up certificates
	I1018 12:44:29.760157  892123 provision.go:84] configureAuth start
	I1018 12:44:29.760217  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:29.777989  892123 provision.go:143] copyHostCerts
	I1018 12:44:29.778029  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:29.778061  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:44:29.778077  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:29.778149  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:44:29.778226  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:29.778242  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:44:29.778247  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:29.778271  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:44:29.778308  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:29.778329  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:44:29.778333  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:29.778355  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:44:29.778399  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693 san=[127.0.0.1 192.168.49.2 ha-904693 localhost minikube]
	I1018 12:44:31.047109  892123 provision.go:177] copyRemoteCerts
	I1018 12:44:31.047193  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:44:31.047278  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.066067  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.172668  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:44:31.172743  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1018 12:44:31.191530  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:44:31.191692  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:44:31.211233  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:44:31.211300  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
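The server certificate regenerated a few lines up carries the SANs requested in the provision step (127.0.0.1, 192.168.49.2, ha-904693, localhost, minikube), and the scp commands above place it under /etc/docker on the node. An optional hand check of those SANs, assuming the same remote path, is a one-liner:

	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'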
	I1018 12:44:31.230446  892123 provision.go:87] duration metric: took 1.47026349s to configureAuth
	I1018 12:44:31.230476  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:44:31.230724  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:31.230839  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.248755  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:31.249077  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:31.249098  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:44:31.576103  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:44:31.576129  892123 machine.go:96] duration metric: took 5.335328605s to provisionDockerMachine
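Part of that provisioning was writing /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and restarting CRI-O. If you are retracing the run by hand, a minimal sketch to confirm the override landed and the service came back, using the same path and unit name as the command above:

	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio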
	I1018 12:44:31.576140  892123 start.go:293] postStartSetup for "ha-904693" (driver="docker")
	I1018 12:44:31.576162  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:44:31.576224  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:44:31.576268  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.597908  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.707679  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:44:31.711002  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:44:31.711071  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:44:31.711090  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:44:31.711155  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:44:31.711247  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:44:31.711259  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:44:31.711355  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:44:31.718886  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:31.736340  892123 start.go:296] duration metric: took 160.184199ms for postStartSetup
	I1018 12:44:31.736438  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:44:31.736480  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.754046  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.853280  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:44:31.858215  892123 fix.go:56] duration metric: took 5.934701373s for fixHost
	I1018 12:44:31.858243  892123 start.go:83] releasing machines lock for "ha-904693", held for 5.934757012s
	I1018 12:44:31.858326  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:31.875758  892123 ssh_runner.go:195] Run: cat /version.json
	I1018 12:44:31.875830  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.875893  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:44:31.875954  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.896371  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.899369  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:32.089885  892123 ssh_runner.go:195] Run: systemctl --version
	I1018 12:44:32.096829  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:44:32.132460  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:44:32.136865  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:44:32.136993  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:44:32.144884  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:44:32.144907  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:44:32.144959  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:44:32.145021  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:44:32.160437  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:44:32.173683  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:44:32.173774  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:44:32.189773  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:44:32.203204  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:44:32.313641  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:44:32.432880  892123 docker.go:234] disabling docker service ...
	I1018 12:44:32.432958  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:44:32.449965  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:44:32.464069  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:44:32.584779  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:44:32.701524  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:44:32.716906  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:44:32.732220  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:44:32.732290  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.741629  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:44:32.741721  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.750956  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.760523  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.769646  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:44:32.777805  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.786814  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.795384  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.804860  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:44:32.812429  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:44:32.820169  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:32.933627  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
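The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "cgroupfs", conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. After the restart, a quick way to read those keys back (a sketch; the key names are taken from the sed commands, not from a captured file):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf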
	I1018 12:44:33.073156  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:44:33.073243  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:44:33.077339  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:44:33.077414  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:44:33.081817  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:44:33.111160  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:44:33.111248  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:44:33.140441  892123 ssh_runner.go:195] Run: crio --version
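Both version probes go through the CRI endpoint written to /etc/crictl.yaml a few steps earlier. The equivalent manual call, with the socket spelled out explicitly instead of read from that file (flag usage is a sketch, not taken from the log):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version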
	I1018 12:44:33.172376  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:44:33.175295  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:44:33.191834  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:44:33.195889  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:44:33.206059  892123 kubeadm.go:883] updating cluster {Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:44:33.206251  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:33.206309  892123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:44:33.242225  892123 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:44:33.242255  892123 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:44:33.242314  892123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:44:33.268715  892123 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:44:33.268738  892123 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:44:33.268746  892123 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 12:44:33.268859  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:44:33.268940  892123 ssh_runner.go:195] Run: crio config
	I1018 12:44:33.339264  892123 cni.go:84] Creating CNI manager for ""
	I1018 12:44:33.339288  892123 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1018 12:44:33.339305  892123 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:44:33.339328  892123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-904693 NodeName:ha-904693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:44:33.339459  892123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-904693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:44:33.339481  892123 kube-vip.go:115] generating kube-vip config ...
	I1018 12:44:33.339539  892123 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 12:44:33.352416  892123 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:44:33.352526  892123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
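Since the ip_vs modules are not loadable here, minikube skips kube-vip's IPVS load-balancing and the manifest above falls back to plain ARP announcement of the control-plane VIP (vip_arp "true", address 192.168.49.254, interface eth0). A hand-run check that the VIP is actually bound once the static pod comes up, assuming the same interface and address:

	sudo sh -c "lsmod | grep ip_vs" || echo "ip_vs unavailable: ARP announcement only"
	ip addr show dev eth0 | grep 192.168.49.254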
	I1018 12:44:33.352590  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:44:33.360442  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:44:33.360534  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 12:44:33.368315  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 12:44:33.381459  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:44:33.394655  892123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
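kubeadm.yaml.new is the multi-document config rendered a few steps back (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). If it ever needs a manual sanity check, recent kubeadm releases (v1.26 and later) can validate such a file in place; a sketch using the binary path from this run:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new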
	I1018 12:44:33.407827  892123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 12:44:33.421345  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:44:33.425393  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:44:33.435521  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:33.547456  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:44:33.571606  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.2
	I1018 12:44:33.571630  892123 certs.go:195] generating shared ca certs ...
	I1018 12:44:33.571647  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:33.571882  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:44:33.572004  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:44:33.572021  892123 certs.go:257] generating profile certs ...
	I1018 12:44:33.572109  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key
	I1018 12:44:33.572141  892123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44
	I1018 12:44:33.572159  892123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1018 12:44:34.089841  892123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 ...
	I1018 12:44:34.089879  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44: {Name:mk73ee01371c8601ccdf153e68cf18fb41b0caf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:34.090092  892123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44 ...
	I1018 12:44:34.090109  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44: {Name:mkc407effae516c519c94bd817f4f88bdad85974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:34.090201  892123 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt
	I1018 12:44:34.090356  892123 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key
	I1018 12:44:34.090505  892123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key
	I1018 12:44:34.090525  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:44:34.090542  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:44:34.090563  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:44:34.090582  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:44:34.090598  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 12:44:34.090617  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 12:44:34.090634  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 12:44:34.090652  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 12:44:34.090706  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:44:34.090745  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:44:34.090766  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:44:34.090802  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:44:34.090831  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:44:34.090865  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:44:34.090911  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:34.090942  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.090959  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.090975  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.091691  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:44:34.111143  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:44:34.130224  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:44:34.147895  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:44:34.166568  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:44:34.191542  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:44:34.218375  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:44:34.243094  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:44:34.264702  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:44:34.290199  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:44:34.313998  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:44:34.341991  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:44:34.361379  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:44:34.380056  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:44:34.400140  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.409637  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.409718  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.514177  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:44:34.526963  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:44:34.541968  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.546450  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.546529  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.608344  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:44:34.616770  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:44:34.627781  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.635676  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.635755  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.691087  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
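The pattern repeated three times above is the standard OpenSSL subject-hash layout: each PEM is linked into /etc/ssl/certs under its own name, and a <hash>.0 symlink is added so OpenSSL can find it during issuer lookup. Done by hand for one of them (the hash value matches the log, where minikubeCA.pem hashes to b5213941):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"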
	I1018 12:44:34.700436  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:44:34.704339  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:44:34.762289  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:44:34.835373  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:44:34.908492  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:44:34.968701  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:44:35.018893  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
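Each of the openssl runs above uses -checkend 86400, which makes openssl exit non-zero if the certificate in question expires within the next 86400 seconds (24 hours). The same check can be run standalone against any of the profile certs, for example the API server cert copied in earlier:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expires within 24h"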
	I1018 12:44:35.074866  892123 kubeadm.go:400] StartCluster: {Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:35.075012  892123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:44:35.075100  892123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:44:35.116413  892123 cri.go:89] found id: "f3e12646a28acaf33acb91c449640e2b7c2e1b51a07fda1222a124108fa3a60d"
	I1018 12:44:35.116441  892123 cri.go:89] found id: "adda974732675bf5434d1d2f50dcf1a62d7e89e192480dcbb5a9ffec2ab87ea9"
	I1018 12:44:35.116447  892123 cri.go:89] found id: "10798af55ae16ce657fb223cc3b8e580322135ff7246e162207a86ef8e91e5de"
	I1018 12:44:35.116470  892123 cri.go:89] found id: "2df8ceef3f1125567cb2b22627f6c2b90e7425331ffa5e5bbe8a97dcb849d5af"
	I1018 12:44:35.116474  892123 cri.go:89] found id: "bb134bdda02b2b1865dbf7bfd965c0d86f8c2b7ee0818669fb4f4cfd3f5f8484"
	I1018 12:44:35.116478  892123 cri.go:89] found id: ""
	I1018 12:44:35.116537  892123 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:44:35.135127  892123 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:44:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:44:35.135230  892123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:44:35.147730  892123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:44:35.147766  892123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:44:35.147824  892123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:44:35.157524  892123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:44:35.158025  892123 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-904693" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:35.158160  892123 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "ha-904693" cluster setting kubeconfig missing "ha-904693" context setting]
	I1018 12:44:35.158473  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.159101  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:44:35.159857  892123 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 12:44:35.159896  892123 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 12:44:35.159940  892123 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 12:44:35.159949  892123 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 12:44:35.159955  892123 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 12:44:35.159960  892123 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 12:44:35.160422  892123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:44:35.173010  892123 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 12:44:35.173040  892123 kubeadm.go:601] duration metric: took 25.265992ms to restartPrimaryControlPlane
	I1018 12:44:35.173050  892123 kubeadm.go:402] duration metric: took 98.194754ms to StartCluster
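The kubeconfig repair logged above (kubeconfig.go:62) re-creates the missing "ha-904693" cluster and context entries and writes the file back under a file lock. Below is a minimal sketch of that kind of repair with client-go's clientcmd package, reusing the endpoint, CA path and profile name from this log; it is an illustration under those assumptions, not minikube's own implementation, and the client-certificate AuthInfo wiring is omitted.

	package main

	import (
		"log"

		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		// Paths and names taken from the log above.
		kubeconfig := "/home/jenkins/minikube-integration/21647-834184/kubeconfig"
		profile := "ha-904693"

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			log.Fatal(err)
		}

		// Re-create the missing cluster entry for the profile.
		cfg.Clusters[profile] = &clientcmdapi.Cluster{
			Server:               "https://192.168.49.2:8443",
			CertificateAuthority: "/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt",
		}
		// Re-create the missing context entry pointing at that cluster.
		cfg.Contexts[profile] = &clientcmdapi.Context{
			Cluster:  profile,
			AuthInfo: profile,
		}
		cfg.CurrentContext = profile

		if err := clientcmd.WriteToFile(*cfg, kubeconfig); err != nil {
			log.Fatal(err)
		}
	}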
	I1018 12:44:35.173077  892123 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.173159  892123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:35.173840  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.174085  892123 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:44:35.174116  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:44:35.174143  892123 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:44:35.174720  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:35.180626  892123 out.go:179] * Enabled addons: 
	I1018 12:44:35.183765  892123 addons.go:514] duration metric: took 9.629337ms for enable addons: enabled=[]
	I1018 12:44:35.183834  892123 start.go:246] waiting for cluster config update ...
	I1018 12:44:35.183849  892123 start.go:255] writing updated cluster config ...
	I1018 12:44:35.186931  892123 out.go:203] 
	I1018 12:44:35.190015  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:35.190154  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.193614  892123 out.go:179] * Starting "ha-904693-m02" control-plane node in "ha-904693" cluster
	I1018 12:44:35.196414  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:44:35.199358  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:44:35.202336  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:35.202376  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:44:35.202494  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:44:35.202510  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:44:35.202646  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.202901  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:44:35.244427  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:44:35.244451  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:44:35.244465  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:44:35.244491  892123 start.go:360] acquireMachinesLock for ha-904693-m02: {Name:mk6c2f485a3713f332b20d1d9fdf103954df7ac5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:44:35.244553  892123 start.go:364] duration metric: took 42.085µs to acquireMachinesLock for "ha-904693-m02"
	I1018 12:44:35.244578  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:44:35.244587  892123 fix.go:54] fixHost starting: m02
	I1018 12:44:35.244844  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:35.277624  892123 fix.go:112] recreateIfNeeded on ha-904693-m02: state=Stopped err=<nil>
	W1018 12:44:35.277652  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:44:35.280995  892123 out.go:252] * Restarting existing docker container for "ha-904693-m02" ...
	I1018 12:44:35.281088  892123 cli_runner.go:164] Run: docker start ha-904693-m02
	I1018 12:44:35.680444  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:35.711547  892123 kic.go:430] container "ha-904693-m02" state is running.
	I1018 12:44:35.711981  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:35.739312  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.739556  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:44:35.739755  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:35.771422  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:35.771751  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:35.771766  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:44:35.772400  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:44:39.052293  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m02
	
	I1018 12:44:39.052316  892123 ubuntu.go:182] provisioning hostname "ha-904693-m02"
	I1018 12:44:39.052382  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:39.080876  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:39.081188  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:39.081199  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693-m02 && echo "ha-904693-m02" | sudo tee /etc/hostname
	I1018 12:44:39.340056  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m02
	
	I1018 12:44:39.340143  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:39.373338  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:39.373649  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:39.373672  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:44:39.630504  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
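provisionDockerMachine above drives the node entirely over SSH: libmachine dials the container's published SSH port (127.0.0.1:33942 in this run) and runs the hostname and /etc/hosts commands in a session. A minimal sketch of one such session with golang.org/x/crypto/ssh, assuming key-based login with the per-machine id_rsa that sshutil.go uses later in this log; illustrative only, not libmachine's code.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port taken from the log; any reachable SSH host works for the sketch.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33942", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		// Same first command the provisioner runs: read back the hostname.
		out, err := session.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}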
	I1018 12:44:39.630578  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:44:39.630612  892123 ubuntu.go:190] setting up certificates
	I1018 12:44:39.630652  892123 provision.go:84] configureAuth start
	I1018 12:44:39.630734  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:39.675738  892123 provision.go:143] copyHostCerts
	I1018 12:44:39.675784  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:39.675817  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:44:39.675825  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:39.675904  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:44:39.675996  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:39.676014  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:44:39.676020  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:39.676047  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:44:39.676086  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:39.676101  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:44:39.676105  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:39.676126  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:44:39.676170  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693-m02 san=[127.0.0.1 192.168.49.3 ha-904693-m02 localhost minikube]
	I1018 12:44:40.218129  892123 provision.go:177] copyRemoteCerts
	I1018 12:44:40.218244  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:44:40.218322  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:40.236440  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:40.357787  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:44:40.357851  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:44:40.393588  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:44:40.393654  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:44:40.414582  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:44:40.414689  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:44:40.435522  892123 provision.go:87] duration metric: took 804.840193ms to configureAuth
	I1018 12:44:40.435591  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:44:40.435862  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:40.436016  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:40.461848  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:40.462155  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:40.462170  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:44:41.604038  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:44:41.604122  892123 machine.go:96] duration metric: took 5.864556191s to provisionDockerMachine
	I1018 12:44:41.604150  892123 start.go:293] postStartSetup for "ha-904693-m02" (driver="docker")
	I1018 12:44:41.604193  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:44:41.604277  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:44:41.604362  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:41.635166  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:41.769733  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:44:41.773730  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:44:41.773761  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:44:41.773774  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:44:41.773829  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:44:41.773913  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:44:41.773925  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:44:41.774028  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:44:41.784876  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:41.825486  892123 start.go:296] duration metric: took 221.293722ms for postStartSetup
	I1018 12:44:41.825575  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:44:41.825622  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:41.853550  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:41.984344  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:44:41.992594  892123 fix.go:56] duration metric: took 6.7479992s for fixHost
	I1018 12:44:41.992625  892123 start.go:83] releasing machines lock for "ha-904693-m02", held for 6.748059204s
	I1018 12:44:41.992720  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:42.035079  892123 out.go:179] * Found network options:
	I1018 12:44:42.038018  892123 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 12:44:42.041005  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:44:42.041052  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 12:44:42.041143  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:44:42.041192  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:42.041445  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:44:42.041506  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:42.075479  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:42.085476  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:42.517801  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:44:42.530700  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:44:42.530775  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:44:42.589914  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:44:42.589943  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:44:42.589978  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:44:42.590036  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:44:42.638987  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:44:42.723590  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:44:42.723700  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:44:42.768190  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:44:42.816075  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:44:43.152357  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:44:43.513952  892123 docker.go:234] disabling docker service ...
	I1018 12:44:43.514041  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:44:43.540222  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:44:43.562890  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:44:43.881442  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:44:44.114079  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:44:44.148782  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:44:44.181271  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:44:44.181354  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.192614  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:44:44.192694  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.213293  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.227635  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.246173  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:44:44.260324  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.277559  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.289335  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.301185  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:44:44.310422  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:44:44.319878  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:44.623936  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:46:14.836486  892123 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.212505487s)
	I1018 12:46:14.836513  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:46:14.836567  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:46:14.840408  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:46:14.840481  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:46:14.844075  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:46:14.874919  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:46:14.875007  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:14.904606  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:14.937907  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:46:14.940843  892123 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 12:46:14.943768  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:46:14.960925  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:46:14.964939  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:14.975051  892123 mustload.go:65] Loading cluster: ha-904693
	I1018 12:46:14.975310  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:14.975576  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:46:14.993112  892123 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:46:14.993392  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.3
	I1018 12:46:14.993406  892123 certs.go:195] generating shared ca certs ...
	I1018 12:46:14.993423  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:46:14.993545  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:46:14.993591  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:46:14.993605  892123 certs.go:257] generating profile certs ...
	I1018 12:46:14.993681  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key
	I1018 12:46:14.993743  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.385e3bc8
	I1018 12:46:14.993827  892123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key
	I1018 12:46:14.993839  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:46:14.993853  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:46:14.993868  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:46:14.993881  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:46:14.993896  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 12:46:14.993915  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 12:46:14.993927  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 12:46:14.993940  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 12:46:14.993992  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:46:14.994023  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:46:14.994036  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:46:14.994064  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:46:14.994090  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:46:14.994114  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:46:14.994159  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:14.994187  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:14.994202  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:46:14.994213  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:46:14.994275  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:46:15.025861  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:46:15.144065  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 12:46:15.148291  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 12:46:15.157425  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 12:46:15.161586  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 12:46:15.170498  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 12:46:15.175977  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 12:46:15.189359  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 12:46:15.193340  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1018 12:46:15.202262  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 12:46:15.206095  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 12:46:15.214849  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 12:46:15.219115  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 12:46:15.228620  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:46:15.247537  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:46:15.267038  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:46:15.296556  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:46:15.317916  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:46:15.336289  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:46:15.353950  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:46:15.373731  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:46:15.394136  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:46:15.413750  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:46:15.434057  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:46:15.453144  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 12:46:15.471392  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 12:46:15.487802  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 12:46:15.504613  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1018 12:46:15.518898  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 12:46:15.533487  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 12:46:15.549167  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 12:46:15.564048  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:46:15.570605  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:46:15.580039  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.584075  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.584195  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.625980  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:46:15.634627  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:46:15.643508  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.647557  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.647647  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.691919  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 12:46:15.702734  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:46:15.718411  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.727743  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.727823  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.778694  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:46:15.788950  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:46:15.793324  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:46:15.837931  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:46:15.890538  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:46:15.937757  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:46:15.981996  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:46:16.024029  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
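The six openssl runs above use "-checkend 86400" to confirm that each control-plane certificate on m02 stays valid for at least another 24 hours. A minimal Go sketch of the same check with crypto/x509, using one of the certificate paths from the log; illustrative, not the helper the test itself uses.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// mirroring `openssl x509 -checkend` from the log above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", expiring)
	}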
	I1018 12:46:16.066839  892123 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 12:46:16.067008  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:46:16.067038  892123 kube-vip.go:115] generating kube-vip config ...
	I1018 12:46:16.067094  892123 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 12:46:16.080115  892123 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:46:16.080187  892123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 12:46:16.080261  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:46:16.089171  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:46:16.089252  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 12:46:16.097956  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:46:16.111585  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:46:16.125002  892123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 12:46:16.140735  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:46:16.144498  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:16.154452  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:16.294558  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:16.309039  892123 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:46:16.309487  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:16.314390  892123 out.go:179] * Verifying Kubernetes components...
	I1018 12:46:16.317527  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:16.453319  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:16.468140  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 12:46:16.468216  892123 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 12:46:16.468510  892123 node_ready.go:35] waiting up to 6m0s for node "ha-904693-m02" to be "Ready" ...
	I1018 12:46:18.198175  892123 node_ready.go:49] node "ha-904693-m02" is "Ready"
	I1018 12:46:18.198201  892123 node_ready.go:38] duration metric: took 1.729664998s for node "ha-904693-m02" to be "Ready" ...
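node_ready.go above gives the node up to 6 minutes to report the Ready condition; in this run it flips after about 1.7s. A minimal sketch of that readiness poll with client-go, reusing the kubeconfig path and node name from this log; illustrative only, not the test's own helper.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has the Ready condition set to True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21647-834184/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same budget as the log
		for time.Now().Before(deadline) {
			if ok, err := nodeReady(cs, "ha-904693-m02"); err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for node to be Ready")
	}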
	I1018 12:46:18.198217  892123 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:46:18.198278  892123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:46:18.217101  892123 api_server.go:72] duration metric: took 1.908011588s to wait for apiserver process to appear ...
	I1018 12:46:18.217124  892123 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:46:18.217144  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:18.251260  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:18.251333  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
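The 500 responses above are expected while the rbac/bootstrap-roles post-start hook is still completing; api_server.go simply keeps re-polling /healthz (roughly every half second in this log) until the endpoint returns 200. A minimal sketch of such a polling loop in Go, with the endpoint taken from the log and an arbitrary 2-minute budget; TLS verification is skipped because only the status code matters for the probe.

	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The probe only inspects the status code, so certificate
			// verification is skipped here for brevity.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.49.2:8443/healthz" // endpoint from the log
		deadline := time.Now().Add(2 * time.Minute)

		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz ok")
					return
				}
				// A 500 with "[-]poststarthook/rbac/bootstrap-roles failed" means
				// the control plane is still starting; keep retrying.
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}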
	I1018 12:46:18.717735  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:18.729578  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:18.729649  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:19.217875  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:19.234644  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:19.234731  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:19.717308  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:19.729198  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:19.729276  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:20.217475  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:20.226275  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:20.226367  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:20.718079  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:20.726851  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:20.727067  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:21.217664  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:21.226730  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:21.226816  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:21.717402  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:21.728568  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:21.728640  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:22.217240  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:22.225394  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:22.225426  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:22.717613  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:22.726996  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:22.727026  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:23.217597  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:23.225993  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:23.226022  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:23.717452  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:23.725986  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:23.726020  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:24.217619  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:24.225855  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:24.225886  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:24.717271  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:24.726978  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:24.727011  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:25.217464  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:25.225978  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:25.226004  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:25.717529  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:25.731613  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:25.731677  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:26.218064  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:26.226417  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:26.226450  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:26.718040  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:26.726172  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:26.726250  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[identical healthz output repeated for each poll from 12:46:27.217881 through 12:46:35.228207: api_server.go checked https://192.168.49.2:8443/healthz roughly every 500ms (api_server.go:253, :279, :103) and every response was HTTP 500 with all checks [+] ok except [-]poststarthook/rbac/bootstrap-roles failed: reason withheld, ending in "healthz check failed"]
	W1018 12:46:35.228243  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:35.717991  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:35.726425  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:35.726455  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:36.217618  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:36.226126  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:36.226154  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:36.717772  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:36.726079  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:36.726111  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:37.217227  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:37.228703  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:37.228733  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:37.717268  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:37.725340  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:37.725369  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:38.217518  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:38.225890  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:38.225933  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:38.718202  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:38.726360  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:38.726663  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:39.217201  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:39.225234  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:39.225266  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:39.717823  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:39.726660  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:39.726690  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:40.217283  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:40.226559  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:40.226603  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:40.717962  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:40.744008  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:40.744037  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:41.217607  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:41.225920  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:41.225964  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:41.717267  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:41.725273  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:41.725300  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:42.217469  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:42.226383  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:42.226419  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:42.718060  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:42.726681  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:42.726711  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:43.217278  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:43.225508  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:43.225544  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:43.718222  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:43.728152  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:43.728184  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:44.217541  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:44.225638  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:44.225666  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:44.717265  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:44.725307  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:44.725339  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:45.220300  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:45.238786  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:45.238819  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:45.717206  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:45.726748  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:45.726780  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:46.217362  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:46.225787  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:46.225815  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:46.718214  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:46.727280  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:46.727306  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:47.217946  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:47.226669  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:46:47.227992  892123 api_server.go:141] control plane version: v1.34.1
	I1018 12:46:47.228017  892123 api_server.go:131] duration metric: took 29.010884789s to wait for apiserver health ...
	I1018 12:46:47.228027  892123 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:46:47.235895  892123 system_pods.go:59] 26 kube-system pods found
	I1018 12:46:47.235980  892123 system_pods.go:61] "coredns-66bc5c9577-np459" [33cb0fc6-b8df-4149-85da-e6417a6de391] Running
	I1018 12:46:47.236002  892123 system_pods.go:61] "coredns-66bc5c9577-w4mzd" [76a15b28-7a49-47e3-baf1-12c18b680ade] Running
	I1018 12:46:47.236024  892123 system_pods.go:61] "etcd-ha-904693" [6a65bc4e-41f8-48fd-a64a-c1920f35caf4] Running
	I1018 12:46:47.236074  892123 system_pods.go:61] "etcd-ha-904693-m02" [94a516fe-dcfe-4e93-baa3-fb16142884cc] Running
	I1018 12:46:47.236094  892123 system_pods.go:61] "etcd-ha-904693-m03" [f2d9e3be-8b60-4549-a41d-d8bdab528ea7] Running
	I1018 12:46:47.236117  892123 system_pods.go:61] "kindnet-j75n6" [b30c1029-3217-42b0-87d1-f96b2bf02858] Running
	I1018 12:46:47.236155  892123 system_pods.go:61] "kindnet-lwbfx" [2053e657-7951-4224-aac4-980e101bc352] Running
	I1018 12:46:47.236181  892123 system_pods.go:61] "kindnet-nqql7" [061fc15c-de36-4123-8bb7-ac3d65a44ba4] Running
	I1018 12:46:47.236201  892123 system_pods.go:61] "kindnet-z2jqf" [adbd3882-090c-44e7-96c0-8374c4c8761e] Running
	I1018 12:46:47.236241  892123 system_pods.go:61] "kube-apiserver-ha-904693" [21472a04-9583-4452-949b-6d0d5c44ca4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:46:47.236265  892123 system_pods.go:61] "kube-apiserver-ha-904693-m02" [095e1af5-5aea-4dad-aa89-09611005c26b] Running
	I1018 12:46:47.236284  892123 system_pods.go:61] "kube-apiserver-ha-904693-m03" [ac2fa248-fb39-471a-953b-5caff0045c23] Running
	I1018 12:46:47.236324  892123 system_pods.go:61] "kube-controller-manager-ha-904693" [e46c064c-8863-43f6-8049-bc7f6b5fd6e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:46:47.236350  892123 system_pods.go:61] "kube-controller-manager-ha-904693-m02" [01ced66c-fe9a-49cb-96f9-f117382aaa39] Running
	I1018 12:46:47.236373  892123 system_pods.go:61] "kube-controller-manager-ha-904693-m03" [b2a37b0a-af53-4e5f-b048-630fa65a4562] Running
	I1018 12:46:47.236410  892123 system_pods.go:61] "kube-proxy-25w58" [8120ec45-9954-42fc-ba8c-1784f050d7c7] Running
	I1018 12:46:47.236436  892123 system_pods.go:61] "kube-proxy-bckwd" [3ef760c9-0925-40c4-a43d-3dc1bc11a4f3] Running
	I1018 12:46:47.236457  892123 system_pods.go:61] "kube-proxy-s8rqn" [1b0abab1-7503-4dbb-874d-3a89837e39b8] Running
	I1018 12:46:47.236497  892123 system_pods.go:61] "kube-proxy-xvnxv" [1babac5c-cb8e-4b88-8a73-387df9d8b652] Running
	I1018 12:46:47.236526  892123 system_pods.go:61] "kube-scheduler-ha-904693" [a40b4487-da19-47c0-a990-d459235cd8f0] Running
	I1018 12:46:47.236548  892123 system_pods.go:61] "kube-scheduler-ha-904693-m02" [32877fa9-7d21-4d37-9c42-855b6fd4c11f] Running
	I1018 12:46:47.236581  892123 system_pods.go:61] "kube-scheduler-ha-904693-m03" [fbe42864-50a4-4b9f-bee1-96f3e3db090d] Running
	I1018 12:46:47.236605  892123 system_pods.go:61] "kube-vip-ha-904693" [04fca9f1-a6fd-45a0-abb1-1b9226e1f8f4] Running
	I1018 12:46:47.236627  892123 system_pods.go:61] "kube-vip-ha-904693-m02" [2563b6ff-3a9b-487b-a469-d3a58046306b] Running
	I1018 12:46:47.236663  892123 system_pods.go:61] "kube-vip-ha-904693-m03" [67639c6c-f2c1-4066-999a-b1edb1875d5d] Running
	I1018 12:46:47.236688  892123 system_pods.go:61] "storage-provisioner" [d490933f-6cca-41d5-a5d3-d128def7ed62] Running
	I1018 12:46:47.236711  892123 system_pods.go:74] duration metric: took 8.677343ms to wait for pod list to return data ...
	I1018 12:46:47.236747  892123 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:46:47.240740  892123 default_sa.go:45] found service account: "default"
	I1018 12:46:47.240819  892123 default_sa.go:55] duration metric: took 4.047411ms for default service account to be created ...
	I1018 12:46:47.240844  892123 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:46:47.252062  892123 system_pods.go:86] 26 kube-system pods found
	I1018 12:46:47.252100  892123 system_pods.go:89] "coredns-66bc5c9577-np459" [33cb0fc6-b8df-4149-85da-e6417a6de391] Running
	I1018 12:46:47.252109  892123 system_pods.go:89] "coredns-66bc5c9577-w4mzd" [76a15b28-7a49-47e3-baf1-12c18b680ade] Running
	I1018 12:46:47.252113  892123 system_pods.go:89] "etcd-ha-904693" [6a65bc4e-41f8-48fd-a64a-c1920f35caf4] Running
	I1018 12:46:47.252143  892123 system_pods.go:89] "etcd-ha-904693-m02" [94a516fe-dcfe-4e93-baa3-fb16142884cc] Running
	I1018 12:46:47.252155  892123 system_pods.go:89] "etcd-ha-904693-m03" [f2d9e3be-8b60-4549-a41d-d8bdab528ea7] Running
	I1018 12:46:47.252160  892123 system_pods.go:89] "kindnet-j75n6" [b30c1029-3217-42b0-87d1-f96b2bf02858] Running
	I1018 12:46:47.252164  892123 system_pods.go:89] "kindnet-lwbfx" [2053e657-7951-4224-aac4-980e101bc352] Running
	I1018 12:46:47.252174  892123 system_pods.go:89] "kindnet-nqql7" [061fc15c-de36-4123-8bb7-ac3d65a44ba4] Running
	I1018 12:46:47.252178  892123 system_pods.go:89] "kindnet-z2jqf" [adbd3882-090c-44e7-96c0-8374c4c8761e] Running
	I1018 12:46:47.252186  892123 system_pods.go:89] "kube-apiserver-ha-904693" [21472a04-9583-4452-949b-6d0d5c44ca4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:46:47.252198  892123 system_pods.go:89] "kube-apiserver-ha-904693-m02" [095e1af5-5aea-4dad-aa89-09611005c26b] Running
	I1018 12:46:47.252219  892123 system_pods.go:89] "kube-apiserver-ha-904693-m03" [ac2fa248-fb39-471a-953b-5caff0045c23] Running
	I1018 12:46:47.252234  892123 system_pods.go:89] "kube-controller-manager-ha-904693" [e46c064c-8863-43f6-8049-bc7f6b5fd6e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:46:47.252239  892123 system_pods.go:89] "kube-controller-manager-ha-904693-m02" [01ced66c-fe9a-49cb-96f9-f117382aaa39] Running
	I1018 12:46:47.252247  892123 system_pods.go:89] "kube-controller-manager-ha-904693-m03" [b2a37b0a-af53-4e5f-b048-630fa65a4562] Running
	I1018 12:46:47.252252  892123 system_pods.go:89] "kube-proxy-25w58" [8120ec45-9954-42fc-ba8c-1784f050d7c7] Running
	I1018 12:46:47.252256  892123 system_pods.go:89] "kube-proxy-bckwd" [3ef760c9-0925-40c4-a43d-3dc1bc11a4f3] Running
	I1018 12:46:47.252260  892123 system_pods.go:89] "kube-proxy-s8rqn" [1b0abab1-7503-4dbb-874d-3a89837e39b8] Running
	I1018 12:46:47.252264  892123 system_pods.go:89] "kube-proxy-xvnxv" [1babac5c-cb8e-4b88-8a73-387df9d8b652] Running
	I1018 12:46:47.252277  892123 system_pods.go:89] "kube-scheduler-ha-904693" [a40b4487-da19-47c0-a990-d459235cd8f0] Running
	I1018 12:46:47.252294  892123 system_pods.go:89] "kube-scheduler-ha-904693-m02" [32877fa9-7d21-4d37-9c42-855b6fd4c11f] Running
	I1018 12:46:47.252308  892123 system_pods.go:89] "kube-scheduler-ha-904693-m03" [fbe42864-50a4-4b9f-bee1-96f3e3db090d] Running
	I1018 12:46:47.252312  892123 system_pods.go:89] "kube-vip-ha-904693" [04fca9f1-a6fd-45a0-abb1-1b9226e1f8f4] Running
	I1018 12:46:47.252318  892123 system_pods.go:89] "kube-vip-ha-904693-m02" [2563b6ff-3a9b-487b-a469-d3a58046306b] Running
	I1018 12:46:47.252336  892123 system_pods.go:89] "kube-vip-ha-904693-m03" [67639c6c-f2c1-4066-999a-b1edb1875d5d] Running
	I1018 12:46:47.252342  892123 system_pods.go:89] "storage-provisioner" [d490933f-6cca-41d5-a5d3-d128def7ed62] Running
	I1018 12:46:47.252357  892123 system_pods.go:126] duration metric: took 11.472811ms to wait for k8s-apps to be running ...
	I1018 12:46:47.252376  892123 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:46:47.252446  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:46:47.269517  892123 system_svc.go:56] duration metric: took 17.132227ms WaitForService to wait for kubelet
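The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH and treats a zero exit code as "running". A small Go sketch of the same idea, run locally rather than over SSH (the local execution and plain unit name are assumptions for illustration):

// kubelet_check.go - sketch of verifying that the kubelet systemd unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletRunning() bool {
	// "systemctl is-active --quiet <unit>" exits 0 only when the unit is active.
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}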
	I1018 12:46:47.269546  892123 kubeadm.go:586] duration metric: took 30.960462504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:46:47.269566  892123 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:46:47.274201  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274235  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274248  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274253  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274257  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274296  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274304  892123 node_conditions.go:105] duration metric: took 4.713888ms to run NodePressure ...
	I1018 12:46:47.274322  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:46:47.274358  892123 start.go:255] writing updated cluster config ...
	I1018 12:46:47.277881  892123 out.go:203] 
	I1018 12:46:47.280982  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:47.281113  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.284552  892123 out.go:179] * Starting "ha-904693-m04" worker node in "ha-904693" cluster
	I1018 12:46:47.288329  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:46:47.290468  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:46:47.293413  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:46:47.293456  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:46:47.293503  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:46:47.293595  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:46:47.293607  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:46:47.293757  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.314739  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:46:47.314762  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:46:47.314780  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:46:47.314805  892123 start.go:360] acquireMachinesLock for ha-904693-m04: {Name:mk97ed96515b1272cbdea992e117b8911f5b1654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:46:47.314870  892123 start.go:364] duration metric: took 45.481µs to acquireMachinesLock for "ha-904693-m04"
	I1018 12:46:47.314893  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:46:47.314902  892123 fix.go:54] fixHost starting: m04
	I1018 12:46:47.315155  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:46:47.332443  892123 fix.go:112] recreateIfNeeded on ha-904693-m04: state=Stopped err=<nil>
	W1018 12:46:47.332521  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:46:47.335757  892123 out.go:252] * Restarting existing docker container for "ha-904693-m04" ...
	I1018 12:46:47.335864  892123 cli_runner.go:164] Run: docker start ha-904693-m04
	I1018 12:46:47.662072  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:46:47.692999  892123 kic.go:430] container "ha-904693-m04" state is running.
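After "docker start", the log confirms the node container's state with "docker container inspect --format={{.State.Status}}". A short Go sketch of that state check, using the container name from the log and simplified error handling (illustrative only, not the minikube implementation):

// container_state.go - sketch of reading a container's state via docker inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("ha-904693-m04")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", state) // expected "running" after a successful docker start
}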
	I1018 12:46:47.693365  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:47.716277  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.716634  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:46:47.716712  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:47.737549  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:47.737866  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:47.737883  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:46:47.738856  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:46:50.891423  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m04
	
	I1018 12:46:50.891500  892123 ubuntu.go:182] provisioning hostname "ha-904693-m04"
	I1018 12:46:50.891579  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:50.911143  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:50.911556  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:50.911590  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693-m04 && echo "ha-904693-m04" | sudo tee /etc/hostname
	I1018 12:46:51.083384  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m04
	
	I1018 12:46:51.083546  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.103177  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:51.103480  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:51.103496  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:46:51.264024  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:46:51.264123  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:46:51.264148  892123 ubuntu.go:190] setting up certificates
	I1018 12:46:51.264172  892123 provision.go:84] configureAuth start
	I1018 12:46:51.264250  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:51.283401  892123 provision.go:143] copyHostCerts
	I1018 12:46:51.283446  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:46:51.283481  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:46:51.283494  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:46:51.283573  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:46:51.283688  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:46:51.283714  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:46:51.283724  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:46:51.283763  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:46:51.283815  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:46:51.283836  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:46:51.283845  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:46:51.283870  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:46:51.283923  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693-m04 san=[127.0.0.1 192.168.49.5 ha-904693-m04 localhost minikube]
	I1018 12:46:51.487797  892123 provision.go:177] copyRemoteCerts
	I1018 12:46:51.487868  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:46:51.487911  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.510008  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:51.615718  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:46:51.615785  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:46:51.634401  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:46:51.634467  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:46:51.655136  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:46:51.655199  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:46:51.677312  892123 provision.go:87] duration metric: took 413.118272ms to configureAuth
	I1018 12:46:51.677338  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:46:51.677569  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:51.677678  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.695105  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:51.695420  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:51.695442  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:46:52.007291  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:46:52.007315  892123 machine.go:96] duration metric: took 4.290661536s to provisionDockerMachine
	I1018 12:46:52.007328  892123 start.go:293] postStartSetup for "ha-904693-m04" (driver="docker")
	I1018 12:46:52.007341  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:46:52.007440  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:46:52.007488  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.034279  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.148189  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:46:52.151952  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:46:52.152034  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:46:52.152060  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:46:52.152123  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:46:52.152205  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:46:52.152217  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:46:52.152317  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:46:52.160224  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:52.185280  892123 start.go:296] duration metric: took 177.935801ms for postStartSetup
	I1018 12:46:52.185367  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:46:52.185409  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.204012  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.309958  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:46:52.318024  892123 fix.go:56] duration metric: took 5.003113681s for fixHost
	I1018 12:46:52.318051  892123 start.go:83] releasing machines lock for "ha-904693-m04", held for 5.003169468s
	I1018 12:46:52.318132  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:52.338543  892123 out.go:179] * Found network options:
	I1018 12:46:52.341584  892123 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1018 12:46:52.344371  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344399  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344423  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344438  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 12:46:52.344508  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:46:52.344554  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.344831  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:46:52.344903  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.372515  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.374225  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.579686  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:46:52.584329  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:46:52.584402  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:46:52.593417  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:46:52.593443  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:46:52.593476  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:46:52.593524  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:46:52.609004  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:46:52.623230  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:46:52.623318  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:46:52.639717  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:46:52.657699  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:46:52.794706  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:46:52.921750  892123 docker.go:234] disabling docker service ...
	I1018 12:46:52.921870  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:46:52.939978  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:46:52.957529  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:46:53.104620  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:46:53.235063  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:46:53.249044  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:46:53.264364  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:46:53.264444  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.277945  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:46:53.278028  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.288323  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.297677  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.306794  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:46:53.314879  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.325157  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.333994  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.343268  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:46:53.351341  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:46:53.359207  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:53.488389  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:46:53.631149  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:46:53.631269  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:46:53.635894  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:46:53.636001  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:46:53.640586  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:46:53.680864  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:46:53.680981  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:53.722237  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:53.757817  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:46:53.760732  892123 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 12:46:53.763576  892123 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 12:46:53.765748  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:46:53.783043  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:46:53.787170  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:53.797279  892123 mustload.go:65] Loading cluster: ha-904693
	I1018 12:46:53.797525  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:53.797787  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:46:53.816361  892123 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:46:53.816630  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.5
	I1018 12:46:53.816637  892123 certs.go:195] generating shared ca certs ...
	I1018 12:46:53.816653  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:46:53.816755  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:46:53.816795  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:46:53.816807  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:46:53.816820  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:46:53.816830  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:46:53.816843  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:46:53.816895  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:46:53.816925  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:46:53.816933  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:46:53.816956  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:46:53.816977  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:46:53.816997  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:46:53.817039  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:53.817065  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:53.817077  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.817087  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:46:53.817105  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:46:53.836940  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:46:53.857942  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:46:53.880441  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:46:53.899127  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:46:53.928293  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:46:53.948582  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:46:53.967019  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:46:53.973552  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:46:53.982588  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.986756  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.986822  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:46:54.033044  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 12:46:54.042429  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:46:54.051990  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.056823  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.056924  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.099082  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:46:54.107933  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:46:54.117094  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.121498  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.121603  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.164645  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:46:54.179721  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:46:54.183706  892123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:46:54.183754  892123 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1018 12:46:54.183838  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:46:54.183909  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:46:54.192639  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:46:54.192775  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1018 12:46:54.200819  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:46:54.215040  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:46:54.229836  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:46:54.234543  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:54.244928  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:54.376940  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:54.392818  892123 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1018 12:46:54.393235  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:54.396046  892123 out.go:179] * Verifying Kubernetes components...
	I1018 12:46:54.399111  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:54.530712  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:54.553448  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 12:46:54.553522  892123 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 12:46:54.553818  892123 node_ready.go:35] waiting up to 6m0s for node "ha-904693-m04" to be "Ready" ...
	I1018 12:46:54.557200  892123 node_ready.go:49] node "ha-904693-m04" is "Ready"
	I1018 12:46:54.557238  892123 node_ready.go:38] duration metric: took 3.399257ms for node "ha-904693-m04" to be "Ready" ...
	I1018 12:46:54.557252  892123 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:46:54.557309  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:46:54.571372  892123 system_svc.go:56] duration metric: took 14.111509ms WaitForService to wait for kubelet
	I1018 12:46:54.571412  892123 kubeadm.go:586] duration metric: took 178.543905ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:46:54.571434  892123 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:46:54.575184  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575215  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575227  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575232  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575236  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575242  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575247  892123 node_conditions.go:105] duration metric: took 3.806637ms to run NodePressure ...
	I1018 12:46:54.575260  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:46:54.575287  892123 start.go:255] writing updated cluster config ...
	I1018 12:46:54.575609  892123 ssh_runner.go:195] Run: rm -f paused
	I1018 12:46:54.579787  892123 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:46:54.580332  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:46:54.597506  892123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-np459" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.603509  892123 pod_ready.go:94] pod "coredns-66bc5c9577-np459" is "Ready"
	I1018 12:46:54.603539  892123 pod_ready.go:86] duration metric: took 6.000704ms for pod "coredns-66bc5c9577-np459" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.603550  892123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w4mzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.611441  892123 pod_ready.go:94] pod "coredns-66bc5c9577-w4mzd" is "Ready"
	I1018 12:46:54.611468  892123 pod_ready.go:86] duration metric: took 7.909713ms for pod "coredns-66bc5c9577-w4mzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.615301  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.622147  892123 pod_ready.go:94] pod "etcd-ha-904693" is "Ready"
	I1018 12:46:54.622188  892123 pod_ready.go:86] duration metric: took 6.858682ms for pod "etcd-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.622213  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.628869  892123 pod_ready.go:94] pod "etcd-ha-904693-m02" is "Ready"
	I1018 12:46:54.628906  892123 pod_ready.go:86] duration metric: took 6.68035ms for pod "etcd-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.628916  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.781287  892123 request.go:683] "Waited before sending request" delay="152.209169ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-904693-m03"
	I1018 12:46:54.981063  892123 request.go:683] "Waited before sending request" delay="194.309357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m03"
	I1018 12:46:54.984206  892123 pod_ready.go:99] pod "etcd-ha-904693-m03" in "kube-system" namespace is gone: node "ha-904693-m03" hosting pod "etcd-ha-904693-m03" is not found/running (skipping!): nodes "ha-904693-m03" not found
	I1018 12:46:54.984230  892123 pod_ready.go:86] duration metric: took 355.308487ms for pod "etcd-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:55.181697  892123 request.go:683] "Waited before sending request" delay="197.366801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1018 12:46:55.185514  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:55.380841  892123 request.go:683] "Waited before sending request" delay="195.16471ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693"
	I1018 12:46:55.581533  892123 request.go:683] "Waited before sending request" delay="196.391315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:55.781523  892123 request.go:683] "Waited before sending request" delay="95.293605ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693"
	I1018 12:46:55.981310  892123 request.go:683] "Waited before sending request" delay="196.367824ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.381644  892123 request.go:683] "Waited before sending request" delay="186.36368ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.781281  892123 request.go:683] "Waited before sending request" delay="92.241215ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.784454  892123 pod_ready.go:94] pod "kube-apiserver-ha-904693" is "Ready"
	I1018 12:46:56.784481  892123 pod_ready.go:86] duration metric: took 1.598894155s for pod "kube-apiserver-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:56.784491  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:56.980828  892123 request.go:683] "Waited before sending request" delay="196.248142ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693-m02"
	I1018 12:46:57.181477  892123 request.go:683] "Waited before sending request" delay="197.376581ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m02"
	I1018 12:46:57.184898  892123 pod_ready.go:94] pod "kube-apiserver-ha-904693-m02" is "Ready"
	I1018 12:46:57.184987  892123 pod_ready.go:86] duration metric: took 400.485818ms for pod "kube-apiserver-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.185012  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.381473  892123 request.go:683] "Waited before sending request" delay="196.32459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693-m03"
	I1018 12:46:57.581071  892123 request.go:683] "Waited before sending request" delay="196.144823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m03"
	I1018 12:46:57.583949  892123 pod_ready.go:99] pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace is gone: node "ha-904693-m03" hosting pod "kube-apiserver-ha-904693-m03" is not found/running (skipping!): nodes "ha-904693-m03" not found
	I1018 12:46:57.583972  892123 pod_ready.go:86] duration metric: took 398.952558ms for pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.781459  892123 request.go:683] "Waited before sending request" delay="197.326545ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1018 12:46:57.785500  892123 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.980788  892123 request.go:683] "Waited before sending request" delay="195.154281ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-904693"
	I1018 12:46:58.181517  892123 request.go:683] "Waited before sending request" delay="197.28876ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:58.381504  892123 request.go:683] "Waited before sending request" delay="95.288468ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-904693"
	I1018 12:46:58.580784  892123 request.go:683] "Waited before sending request" delay="194.281533ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:58.980851  892123 request.go:683] "Waited before sending request" delay="191.275019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:59.381533  892123 request.go:683] "Waited before sending request" delay="92.286237ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	W1018 12:46:59.792577  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:02.292675  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:04.293083  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:06.791662  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:08.795381  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:11.291608  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:13.291844  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:15.792067  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:18.291597  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:20.293497  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:22.793443  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:25.292520  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	I1018 12:47:26.791941  892123 pod_ready.go:94] pod "kube-controller-manager-ha-904693" is "Ready"
	I1018 12:47:26.791970  892123 pod_ready.go:86] duration metric: took 29.006442197s for pod "kube-controller-manager-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:47:26.791980  892123 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:47:28.799636  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:31.297899  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:33.298942  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:35.299122  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:37.799274  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:39.799373  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:42.301596  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:44.799207  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:47.299820  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:49.300296  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:51.798423  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:53.799278  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:56.298648  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:58.299303  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:00.306006  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:02.799215  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:04.802074  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:07.299319  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:09.799601  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:12.299633  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:14.799487  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:17.298286  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:19.298543  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:21.299532  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:23.799455  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:25.799781  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:28.299460  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:30.798185  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:32.799335  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:35.298104  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:37.299134  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:39.299272  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:41.299448  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:43.798462  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:45.799490  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:48.299004  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:50.299216  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:52.300129  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:54.301209  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:56.798691  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:59.299033  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:01.299417  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:03.798310  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:05.798466  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:08.298020  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:10.298851  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:12.299443  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:14.798426  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:17.299094  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:19.299178  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:21.798879  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:24.299310  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:26.798113  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:29.298413  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:31.799065  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:33.799271  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:35.803906  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:38.299064  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:40.299407  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:42.299972  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:44.798560  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:46.798758  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:48.799585  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:51.299544  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:53.300291  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:55.799555  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:58.298220  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:00.308856  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:02.799995  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:05.298036  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:07.300018  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:09.799328  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:12.298707  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:14.298758  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:16.798951  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:19.299158  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:21.799396  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:23.799509  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:26.298486  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:28.298553  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:30.298649  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:32.299193  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:34.800007  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:37.299243  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:39.799471  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:42.299390  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:44.798986  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:47.298083  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:49.300477  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:51.799774  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:54.298353  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	I1018 12:50:54.580674  892123 pod_ready.go:86] duration metric: took 3m27.788657319s for pod "kube-controller-manager-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:50:54.580708  892123 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1018 12:50:54.580723  892123 pod_ready.go:40] duration metric: took 4m0.000906152s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:50:54.583790  892123 out.go:203] 
	W1018 12:50:54.586624  892123 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1018 12:50:54.589451  892123 out.go:203] 
	
	
	==> CRI-O <==
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.15248919Z" level=info msg="Removing container: 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.162574204Z" level=info msg="Error loading conmon cgroup of container 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c: cgroup deleted" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.166108461Z" level=info msg="Removed container 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.757273139Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d6ef62be-0670-480d-80ef-805d2541c64a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.75822135Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=edbacbee-34c6-44e3-8f4d-c6941ddde03a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.759324246Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=dc29f712-7c3a-4dac-a06a-164b273dd7b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.759550702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.7650266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.765739428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.786332369Z" level=info msg="Created container 6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=dc29f712-7c3a-4dac-a06a-164b273dd7b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.787077969Z" level=info msg="Starting container: 6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8" id=fda5d9b2-9dfd-4967-9d1d-f43575d0dec0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.79106357Z" level=info msg="Started container" PID=1459 containerID=6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8 description=kube-system/kube-controller-manager-ha-904693/kube-controller-manager id=fda5d9b2-9dfd-4967-9d1d-f43575d0dec0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcaeeb6053eea250fdbfb9cf232775c5e74d7fbe49740ec76a8f8660f55d7bb
	Oct 18 12:46:22 ha-904693 conmon[1457]: conmon 6b9ca29a1030f2e300fa <ninfo>: container 1459 exited with status 1
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.247328418Z" level=info msg="Removing container: 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.255943755Z" level=info msg="Error loading conmon cgroup of container 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610: cgroup deleted" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.260457493Z" level=info msg="Removed container 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.757343358Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=522be43b-97c6-4135-8419-131b53678f0e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.760799411Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d33a4b7e-c8b6-4953-96d1-ec05fe811ee2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.763087148Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=ca79c353-2f92-46a9-b879-eb4c49528d96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.763391996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.776323243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.77706803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.797430732Z" level=info msg="Created container d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=ca79c353-2f92-46a9-b879-eb4c49528d96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.798666134Z" level=info msg="Starting container: d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a" id=234e12c8-0841-4b87-8ee3-3a75b5d265a4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.808104346Z" level=info msg="Started container" PID=1512 containerID=d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a description=kube-system/kube-controller-manager-ha-904693/kube-controller-manager id=234e12c8-0841-4b87-8ee3-3a75b5d265a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcaeeb6053eea250fdbfb9cf232775c5e74d7fbe49740ec76a8f8660f55d7bb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	d0b92a674c67c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Running             kube-controller-manager   7                   3dcaeeb6053ee       kube-controller-manager-ha-904693   kube-system
	6b9ca29a1030f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Exited              kube-controller-manager   6                   3dcaeeb6053ee       kube-controller-manager-ha-904693   kube-system
	e1f431489a678       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       4                   6974f2ca4c496       storage-provisioner                 kube-system
	77f72db48997f       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  3                   3f717be18b100       kube-vip-ha-904693                  kube-system
	3ed6de721b810       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   81c0a2ba3eb27       coredns-66bc5c9577-np459            kube-system
	56bb35c643a21       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   2                   1229fa54d0b21       busybox-7b57f96db7-v452k            default
	5956d42910b21       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   a43d3d54495f1       coredns-66bc5c9577-w4mzd            kube-system
	b3ff0956e2bae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       3                   6974f2ca4c496       storage-provisioner                 kube-system
	b7079b16a9b7a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 minutes ago       Running             kindnet-cni               2                   d48f01f8d4f05       kindnet-z2jqf                       kube-system
	664bc261a2046       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 minutes ago       Running             kube-proxy                2                   d2c7a02dbdc37       kube-proxy-xvnxv                    kube-system
	f3e12646a28ac       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            3                   2e67607845f25       kube-apiserver-ha-904693            kube-system
	10798af55ae16       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            2                   76601f4f16313       kube-scheduler-ha-904693            kube-system
	2df8ceef3f112       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      2                   cd330999b4f8d       etcd-ha-904693                      kube-system
	bb134bdda02b2       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Exited              kube-vip                  2                   3f717be18b100       kube-vip-ha-904693                  kube-system
	
	
	==> coredns [3ed6de721b81080e2d7009286cc18bd29f76863256af50d7e4af0f831a5e0461] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39165 - 29689 "HINFO IN 1724432357811573338.8138158095689922977. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017539888s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [5956d42910b21e70d3584ad16135f23f6c36232c73ad84e364d7d969d267b3ce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58851 - 10237 "HINFO IN 6142564933790260897.8896674369146005175. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017439783s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
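
	Side note for anyone reproducing this diagnostic by hand: both CoreDNS pods above are timing out against 10.96.0.1:443, the ClusterIP of the default kubernetes Service that their client-go reflectors list from. The Go sketch below is purely illustrative (it is not part of the minikube test harness) and simply probes that same address with a short timeout; the address is only reachable from inside the cluster network (for example from a pod), not from the host running the tests.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// ClusterIP and port taken from the reflector errors in the CoreDNS
	// logs above; only routable from inside the cluster network.
	addr := "10.96.0.1:443"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s succeeded\n", addr)
}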
	
	
	==> describe nodes <==
	Name:               ha-904693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_37_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:36:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:50:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:49:39 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:49:39 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:49:39 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:49:39 +0000   Sat, 18 Oct 2025 12:37:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-904693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                281bd447-f1be-4669-83e5-596eea808f91
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v452k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-np459             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-w4mzd             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-904693                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-z2jqf                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-904693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-904693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-xvnxv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-904693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-904693                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m11s                  kube-proxy       
	  Normal   Starting                 8m9s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-904693 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   NodeHasSufficientPID     8m52s (x8 over 8m52s)  kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m52s (x8 over 8m52s)  kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m52s (x8 over 8m52s)  kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m10s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           8m6s                   node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           7m35s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   NodeHasSufficientPID     6m23s (x8 over 6m23s)  kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 6m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           3m46s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	
	
	Name:               ha-904693-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T12_37_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:37:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:50:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:50:47 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:50:47 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:50:47 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:50:47 +0000   Sat, 18 Oct 2025 12:38:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-904693-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                731d6d01-e152-4180-b869-d1cbd652f7b0
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hrdj5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-904693-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-lwbfx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-904693-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-904693-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-s8rqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-904693-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-904693-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m7s                   kube-proxy       
	  Normal   Starting                 7m59s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Warning  CgroupV1                 8m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m48s (x8 over 8m48s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m48s (x8 over 8m48s)  kubelet          Node ha-904693-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m48s (x8 over 8m48s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m10s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           8m6s                   node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           7m35s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   Starting                 6m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m19s (x8 over 6m19s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m19s (x8 over 6m19s)  kubelet          Node ha-904693-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m19s (x8 over 6m19s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m19s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m46s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	
	
	Name:               ha-904693-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T12_40_18_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:40:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:50:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:49:04 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:49:04 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:49:04 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:49:04 +0000   Sat, 18 Oct 2025 12:40:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-904693-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                5cf17c72-8409-4937-903b-03a3a82789c6
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2bmmd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kindnet-nqql7               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-25w58            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m39s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 7m18s                  kube-proxy       
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)      kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)      kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)      kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-904693-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m10s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           8m6s                   node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   Starting                 7m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     7m37s (x8 over 7m40s)  kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m37s (x8 over 7m40s)  kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m37s (x8 over 7m40s)  kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m35s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   Starting                 4m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m4s (x8 over 4m8s)    kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m4s (x8 over 4m8s)    kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m4s (x8 over 4m8s)    kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m46s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
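
	The per-node Conditions tables above (MemoryPressure, DiskPressure, PIDPressure, Ready) are read from each node's status in the API. As a minimal sketch, assuming a kubeconfig for this cluster is available at the default path, the same conditions could be dumped with client-go roughly as follows (illustrative only, not code from this report):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumes the default kubeconfig location; point this at the profile's
	// kubeconfig if it lives elsewhere.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		fmt.Println(node.Name)
		// Print the same condition fields shown in the tables above.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("  %-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
	}
}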
	
	
	==> dmesg <==
	[  +0.000985] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=00000000204faf8b
	[  +0.001107] FS-Cache: N-key=[10] '34323937363632323639'
	[Oct18 12:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 12:16] overlayfs: idmapped layers are currently not supported
	[Oct18 12:22] overlayfs: idmapped layers are currently not supported
	[Oct18 12:23] overlayfs: idmapped layers are currently not supported
	[Oct18 12:35] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000048 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001047] FS-Cache: O-cookie d=00000000d8d7ca74{9P.session} n=000000006094aa8a
	[  +0.001123] FS-Cache: O-key=[10] '34323938373639393330'
	[  +0.000853] FS-Cache: N-cookie c=00000049 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=000000001487bd7a
	[  +0.001121] FS-Cache: N-key=[10] '34323938373639393330'
	[Oct18 12:36] overlayfs: idmapped layers are currently not supported
	[Oct18 12:37] overlayfs: idmapped layers are currently not supported
	[Oct18 12:38] overlayfs: idmapped layers are currently not supported
	[Oct18 12:40] overlayfs: idmapped layers are currently not supported
	[Oct18 12:41] overlayfs: idmapped layers are currently not supported
	[Oct18 12:42] overlayfs: idmapped layers are currently not supported
	[  +3.761821] overlayfs: idmapped layers are currently not supported
	[ +36.492252] overlayfs: idmapped layers are currently not supported
	[Oct18 12:43] overlayfs: idmapped layers are currently not supported
	[Oct18 12:44] overlayfs: idmapped layers are currently not supported
	[  +3.556272] overlayfs: idmapped layers are currently not supported
	[Oct18 12:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2df8ceef3f1125567cb2b22627f6c2b90e7425331ffa5e5bbe8a97dcb849d5af] <==
	{"level":"info","ts":"2025-10-18T12:46:18.326152Z","caller":"traceutil/trace.go:172","msg":"trace[280274257] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:2365; }","duration":"148.037419ms","start":"2025-10-18T12:46:18.178111Z","end":"2025-10-18T12:46:18.326148Z","steps":["trace[280274257] 'agreement among raft nodes before linearized reading'  (duration: 148.026195ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326175Z","caller":"traceutil/trace.go:172","msg":"trace[1603022509] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:2365; }","duration":"148.083557ms","start":"2025-10-18T12:46:18.178088Z","end":"2025-10-18T12:46:18.326172Z","steps":["trace[1603022509] 'agreement among raft nodes before linearized reading'  (duration: 148.07184ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326234Z","caller":"traceutil/trace.go:172","msg":"trace[130511930] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:4; response_revision:2365; }","duration":"148.157666ms","start":"2025-10-18T12:46:18.178071Z","end":"2025-10-18T12:46:18.326229Z","steps":["trace[130511930] 'agreement among raft nodes before linearized reading'  (duration: 148.112168ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326259Z","caller":"traceutil/trace.go:172","msg":"trace[1445912654] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:2365; }","duration":"148.200399ms","start":"2025-10-18T12:46:18.178054Z","end":"2025-10-18T12:46:18.326254Z","steps":["trace[1445912654] 'agreement among raft nodes before linearized reading'  (duration: 148.188928ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326330Z","caller":"traceutil/trace.go:172","msg":"trace[1229954123] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:11; response_revision:2365; }","duration":"148.289377ms","start":"2025-10-18T12:46:18.178036Z","end":"2025-10-18T12:46:18.326326Z","steps":["trace[1229954123] 'agreement among raft nodes before linearized reading'  (duration: 148.231735ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:46:18.326351Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.328934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:46:18.344747Z","caller":"traceutil/trace.go:172","msg":"trace[734962470] range","detail":"{range_begin:/registry/priorityclasses; range_end:; response_count:0; response_revision:2365; }","duration":"166.713843ms","start":"2025-10-18T12:46:18.178019Z","end":"2025-10-18T12:46:18.344733Z","steps":["trace[734962470] 'agreement among raft nodes before linearized reading'  (duration: 148.321418ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326407Z","caller":"traceutil/trace.go:172","msg":"trace[513164834] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:2365; }","duration":"148.400984ms","start":"2025-10-18T12:46:18.178002Z","end":"2025-10-18T12:46:18.326403Z","steps":["trace[513164834] 'agreement among raft nodes before linearized reading'  (duration: 148.360253ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326438Z","caller":"traceutil/trace.go:172","msg":"trace[1825915532] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2365; }","duration":"148.452734ms","start":"2025-10-18T12:46:18.177982Z","end":"2025-10-18T12:46:18.326435Z","steps":["trace[1825915532] 'agreement among raft nodes before linearized reading'  (duration: 148.439975ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326463Z","caller":"traceutil/trace.go:172","msg":"trace[2054924881] range","detail":"{range_begin:/registry/deployments; range_end:; response_count:0; response_revision:2365; }","duration":"149.526329ms","start":"2025-10-18T12:46:18.176933Z","end":"2025-10-18T12:46:18.326459Z","steps":["trace[2054924881] 'agreement among raft nodes before linearized reading'  (duration: 149.513963ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326514Z","caller":"traceutil/trace.go:172","msg":"trace[1418956280] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:3; response_revision:2365; }","duration":"149.59627ms","start":"2025-10-18T12:46:18.176913Z","end":"2025-10-18T12:46:18.326510Z","steps":["trace[1418956280] 'agreement among raft nodes before linearized reading'  (duration: 149.557213ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326539Z","caller":"traceutil/trace.go:172","msg":"trace[302604753] range","detail":"{range_begin:/registry/podtemplates; range_end:; response_count:0; response_revision:2365; }","duration":"149.648521ms","start":"2025-10-18T12:46:18.176885Z","end":"2025-10-18T12:46:18.326534Z","steps":["trace[302604753] 'agreement among raft nodes before linearized reading'  (duration: 149.637723ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326575Z","caller":"traceutil/trace.go:172","msg":"trace[1331174270] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:2365; }","duration":"149.692148ms","start":"2025-10-18T12:46:18.176868Z","end":"2025-10-18T12:46:18.326560Z","steps":["trace[1331174270] 'agreement among raft nodes before linearized reading'  (duration: 149.678757ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326642Z","caller":"traceutil/trace.go:172","msg":"trace[818132752] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:12; response_revision:2365; }","duration":"149.794918ms","start":"2025-10-18T12:46:18.176844Z","end":"2025-10-18T12:46:18.326639Z","steps":["trace[818132752] 'agreement among raft nodes before linearized reading'  (duration: 149.740657ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326670Z","caller":"traceutil/trace.go:172","msg":"trace[1713580724] range","detail":"{range_begin:/registry/volumeattributesclasses/; range_end:/registry/volumeattributesclasses0; response_count:0; response_revision:2365; }","duration":"149.889492ms","start":"2025-10-18T12:46:18.176775Z","end":"2025-10-18T12:46:18.326664Z","steps":["trace[1713580724] 'agreement among raft nodes before linearized reading'  (duration: 149.876634ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326715Z","caller":"traceutil/trace.go:172","msg":"trace[211047245] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:2; response_revision:2365; }","duration":"151.330336ms","start":"2025-10-18T12:46:18.175381Z","end":"2025-10-18T12:46:18.326712Z","steps":["trace[211047245] 'agreement among raft nodes before linearized reading'  (duration: 151.29754ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326875Z","caller":"traceutil/trace.go:172","msg":"trace[1405564136] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:29; response_revision:2365; }","duration":"151.511433ms","start":"2025-10-18T12:46:18.175359Z","end":"2025-10-18T12:46:18.326870Z","steps":["trace[1405564136] 'agreement among raft nodes before linearized reading'  (duration: 151.365405ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326922Z","caller":"traceutil/trace.go:172","msg":"trace[870780723] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:2365; }","duration":"151.583697ms","start":"2025-10-18T12:46:18.175334Z","end":"2025-10-18T12:46:18.326918Z","steps":["trace[870780723] 'agreement among raft nodes before linearized reading'  (duration: 151.551688ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326948Z","caller":"traceutil/trace.go:172","msg":"trace[951226190] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:2365; }","duration":"151.633584ms","start":"2025-10-18T12:46:18.175309Z","end":"2025-10-18T12:46:18.326943Z","steps":["trace[951226190] 'agreement among raft nodes before linearized reading'  (duration: 151.621252ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326972Z","caller":"traceutil/trace.go:172","msg":"trace[400054300] range","detail":"{range_begin:/registry/resourceclaimtemplates/; range_end:/registry/resourceclaimtemplates0; response_count:0; response_revision:2365; }","duration":"151.740924ms","start":"2025-10-18T12:46:18.175227Z","end":"2025-10-18T12:46:18.326968Z","steps":["trace[400054300] 'agreement among raft nodes before linearized reading'  (duration: 151.728715ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.327065Z","caller":"traceutil/trace.go:172","msg":"trace[1661544509] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:21; response_revision:2365; }","duration":"156.709429ms","start":"2025-10-18T12:46:18.170352Z","end":"2025-10-18T12:46:18.327061Z","steps":["trace[1661544509] 'agreement among raft nodes before linearized reading'  (duration: 156.627877ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.327290Z","caller":"traceutil/trace.go:172","msg":"trace[1650378666] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:71; response_revision:2365; }","duration":"156.952393ms","start":"2025-10-18T12:46:18.170333Z","end":"2025-10-18T12:46:18.327285Z","steps":["trace[1650378666] 'agreement among raft nodes before linearized reading'  (duration: 156.741454ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.327323Z","caller":"traceutil/trace.go:172","msg":"trace[428909626] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:2365; }","duration":"157.006294ms","start":"2025-10-18T12:46:18.170312Z","end":"2025-10-18T12:46:18.327318Z","steps":["trace[428909626] 'agreement among raft nodes before linearized reading'  (duration: 156.991525ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.369419Z","caller":"traceutil/trace.go:172","msg":"trace[317085595] transaction","detail":"{read_only:false; response_revision:2366; number_of_response:1; }","duration":"119.27978ms","start":"2025-10-18T12:46:18.250127Z","end":"2025-10-18T12:46:18.369407Z","steps":["trace[317085595] 'process raft request'  (duration: 118.872342ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.392088Z","caller":"traceutil/trace.go:172","msg":"trace[1241831869] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2370; }","duration":"111.572204ms","start":"2025-10-18T12:46:18.280506Z","end":"2025-10-18T12:46:18.392078Z","steps":["trace[1241831869] 'agreement among raft nodes before linearized reading'  (duration: 111.516351ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:50:56 up  4:33,  0 user,  load average: 0.66, 1.22, 1.64
	Linux ha-904693 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b7079b16a9b7a2a39fa399b6c2af14323e7571db253c3823a3927f85257d9854] <==
	I1018 12:50:15.001953       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:50:24.996513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:50:24.996644       1 main.go:301] handling current node
	I1018 12:50:24.996670       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:50:24.996678       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:50:24.996841       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:50:24.996854       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:50:34.997838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:50:34.997874       1 main.go:301] handling current node
	I1018 12:50:34.997890       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:50:34.997896       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:50:34.998069       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:50:34.998081       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:50:44.996386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:50:44.996534       1 main.go:301] handling current node
	I1018 12:50:44.996575       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:50:44.996620       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:50:44.996810       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:50:44.996860       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:50:55.001909       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:50:55.002019       1 main.go:301] handling current node
	I1018 12:50:55.002076       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:50:55.002117       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:50:55.002341       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:50:55.002385       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f3e12646a28acaf33acb91c449640e2b7c2e1b51a07fda1222a124108fa3a60d] <==
	{"level":"warn","ts":"2025-10-18T12:46:18.148825Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026672c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148839Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015fd0e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148853Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400202ed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148867Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40011f43c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148880Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002666960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148896Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd2780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148670Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002ce03c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.151438Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002174960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.151912Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015fc5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155109Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd30e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155205Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155239Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a325a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155306Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002666960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155314Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400141f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160120Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027dc780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160123Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd30e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160241Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bb0f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160569Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000ed9860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1018 12:46:33.558140       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:46:36.235887       1 controller.go:667] quota admission added evaluator for: endpoints
	W1018 12:46:47.238564       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1018 12:46:47.262716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:47:10.772461       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:47:11.078494       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:47:11.124245       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8] <==
	I1018 12:46:09.683548       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:46:10.407864       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 12:46:10.407894       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:46:10.409427       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 12:46:10.409610       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 12:46:10.409861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 12:46:10.409969       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:46:22.428900       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a] <==
	I1018 12:47:10.692829       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	E1018 12:47:30.656861       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:30.656970       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:30.656983       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:30.656990       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:30.656996       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657450       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657482       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657489       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657495       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657505       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	I1018 12:47:50.671214       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-904693-m03"
	I1018 12:47:50.721328       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-904693-m03"
	I1018 12:47:50.721365       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-j75n6"
	I1018 12:47:50.760722       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-j75n6"
	I1018 12:47:50.760993       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-904693-m03"
	I1018 12:47:50.808228       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-904693-m03"
	I1018 12:47:50.808276       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bckwd"
	I1018 12:47:50.847148       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bckwd"
	I1018 12:47:50.847260       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-904693-m03"
	I1018 12:47:50.881140       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-904693-m03"
	I1018 12:47:50.881190       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-904693-m03"
	I1018 12:47:50.922459       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-904693-m03"
	I1018 12:47:50.922494       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-904693-m03"
	I1018 12:47:50.962354       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-904693-m03"
	
	
	==> kube-proxy [664bc261a20461615c227d76978fcabbc9c19e3de0de14724a6fb0f9bbcb8676] <==
	E1018 12:45:30.503531       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": http2: client connection lost"
	I1018 12:45:30.503572       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1018 12:45:34.448156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:34.448255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:34.448188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:34.448343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:37.516021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:37.516021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:37.516161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:37.516292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:43.916094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:43.916094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:43.916214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:43.916222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:43.916267       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:45:54.700040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:54.700039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:54.700156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:54.700208       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:45:54.700282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:46:09.964095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:46:09.964311       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:46:13.036094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:46:13.036094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:46:16.108095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kube-scheduler [10798af55ae16ce657fb223cc3b8e580322135ff7246e162207a86ef8e91e5de] <==
	I1018 12:44:43.483110       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:44:43.485678       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:44:43.485928       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:44:43.485983       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:44:43.486026       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:44:43.495638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:44:43.497112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:44:43.500234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:44:43.500370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:44:43.500447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:44:43.500535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:44:43.500610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:44:43.500685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:44:43.500757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:44:43.500889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:44:43.500972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:44:43.501347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:44:43.502109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:44:43.501648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:44:43.501688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:44:43.501702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:44:43.502170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:44:43.501546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:44:43.502196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 12:44:44.986536       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:45:47 ha-904693 kubelet[799]: I1018 12:45:47.150859     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:45:47 ha-904693 kubelet[799]: E1018 12:45:47.151001     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:45:51 ha-904693 kubelet[799]: E1018 12:45:51.687581     799 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-ha-904693)" podUID="d3c5d3145f312260295e29de6ab47ebb" pod="kube-system/kube-controller-manager-ha-904693"
	Oct 18 12:45:52 ha-904693 kubelet[799]: E1018 12:45:52.693374     799 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-904693.186f96842d53c593  default   2360 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-904693,UID:ha-904693,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-904693 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-904693,},FirstTimestamp:2025-10-18 12:44:33 +0000 UTC,LastTimestamp:2025-10-18 12:44:33.857399662 +0000 UTC m=+0.292657332,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-904693,}"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.122804     799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-904693?timeout=10s\": context deadline exceeded" interval="200ms"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.698869     799 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-904693\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-904693?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.699164     799 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Oct 18 12:45:55 ha-904693 kubelet[799]: I1018 12:45:55.781460     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:45:55 ha-904693 kubelet[799]: E1018 12:45:55.781682     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:04 ha-904693 kubelet[799]: E1018 12:46:04.324141     799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-904693?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	Oct 18 12:46:08 ha-904693 kubelet[799]: I1018 12:46:08.756735     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:46:14 ha-904693 kubelet[799]: E1018 12:46:14.725537     799 request.go:1196] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)"
	Oct 18 12:46:14 ha-904693 kubelet[799]: E1018 12:46:14.725613     799 controller.go:145] "Failed to ensure lease exists, will retry" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" interval="800ms"
	Oct 18 12:46:23 ha-904693 kubelet[799]: I1018 12:46:23.245175     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:46:23 ha-904693 kubelet[799]: I1018 12:46:23.245500     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:23 ha-904693 kubelet[799]: E1018 12:46:23.245643     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:25 ha-904693 kubelet[799]: I1018 12:46:25.781162     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:25 ha-904693 kubelet[799]: E1018 12:46:25.781843     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:26 ha-904693 kubelet[799]: I1018 12:46:26.573839     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:26 ha-904693 kubelet[799]: E1018 12:46:26.574012     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:39 ha-904693 kubelet[799]: I1018 12:46:39.758543     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:39 ha-904693 kubelet[799]: E1018 12:46:39.758726     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:53 ha-904693 kubelet[799]: I1018 12:46:53.756932     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:53 ha-904693 kubelet[799]: E1018 12:46:53.757548     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:47:07 ha-904693 kubelet[799]: I1018 12:47:07.756671     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-904693 -n ha-904693
helpers_test.go:269: (dbg) Run:  kubectl --context ha-904693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (392.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-904693" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-904693\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-904693\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-904693\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"reg
istry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticI
P\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-904693
helpers_test.go:243: (dbg) docker inspect ha-904693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e",
	        "Created": "2025-10-18T12:36:31.14853988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 892248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:44:25.97701543Z",
	            "FinishedAt": "2025-10-18T12:44:25.288916989Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e-json.log",
	        "Name": "/ha-904693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-904693:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-904693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e",
	                "LowerDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-904693",
	                "Source": "/var/lib/docker/volumes/ha-904693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-904693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-904693",
	                "name.minikube.sigs.k8s.io": "ha-904693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c50d05dd02fc73a6e1bf9086ad2446bd076fd521984307bb39ab5a499f23326",
	            "SandboxKey": "/var/run/docker/netns/9c50d05dd02f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-904693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:d6:c0:3d:80:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee94edf185e561d017352654d9e728ff82b5f4b27507dd51d551497bab189810",
	                    "EndpointID": "255fc8c5c14856f51b7da7876d61e503ec6a3f85dd6b9147108386eebadf9c15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-904693",
	                        "9e9432db50a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-904693 -n ha-904693
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 logs -n 25: (1.976194269s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-904693 cp ha-904693-m03:/home/docker/cp-test.txt ha-904693-m04:/home/docker/cp-test_ha-904693-m03_ha-904693-m04.txt               │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test_ha-904693-m03_ha-904693-m04.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp testdata/cp-test.txt ha-904693-m04:/home/docker/cp-test.txt                                                             │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2476059903/001/cp-test_ha-904693-m04.txt │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693:/home/docker/cp-test_ha-904693-m04_ha-904693.txt                       │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693.txt                                                 │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693-m02:/home/docker/cp-test_ha-904693-m04_ha-904693-m02.txt               │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m02 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693-m02.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693-m03:/home/docker/cp-test_ha-904693-m04_ha-904693-m03.txt               │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m03 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693-m03.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ node    │ ha-904693 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:41 UTC │
	│ node    │ ha-904693 node start m02 --alsologtostderr -v 5                                                                                      │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:41 UTC │
	│ node    │ ha-904693 node list --alsologtostderr -v 5                                                                                           │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │                     │
	│ stop    │ ha-904693 stop --alsologtostderr -v 5                                                                                                │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:41 UTC │
	│ start   │ ha-904693 start --wait true --alsologtostderr -v 5                                                                                   │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:43 UTC │
	│ node    │ ha-904693 node list --alsologtostderr -v 5                                                                                           │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │                     │
	│ node    │ ha-904693 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │ 18 Oct 25 12:43 UTC │
	│ stop    │ ha-904693 stop --alsologtostderr -v 5                                                                                                │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │ 18 Oct 25 12:44 UTC │
	│ start   │ ha-904693 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:44:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:44:25.711916  892123 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:44:25.712088  892123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.712119  892123 out.go:374] Setting ErrFile to fd 2...
	I1018 12:44:25.712138  892123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.712423  892123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:44:25.712837  892123 out.go:368] Setting JSON to false
	I1018 12:44:25.713721  892123 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16018,"bootTime":1760775448,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:44:25.713821  892123 start.go:141] virtualization:  
	I1018 12:44:25.719185  892123 out.go:179] * [ha-904693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:44:25.722230  892123 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:44:25.722359  892123 notify.go:220] Checking for updates...
	I1018 12:44:25.728356  892123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:44:25.731393  892123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:25.734246  892123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:44:25.737415  892123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:44:25.740192  892123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:44:25.743783  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:25.744347  892123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:44:25.769253  892123 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:44:25.769378  892123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:44:25.830176  892123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:44:25.820847832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:44:25.830279  892123 docker.go:318] overlay module found
	I1018 12:44:25.833295  892123 out.go:179] * Using the docker driver based on existing profile
	I1018 12:44:25.836144  892123 start.go:305] selected driver: docker
	I1018 12:44:25.836180  892123 start.go:925] validating driver "docker" against &{Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:25.836325  892123 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:44:25.836440  892123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:44:25.891844  892123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:44:25.88247637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:44:25.892307  892123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:44:25.892333  892123 cni.go:84] Creating CNI manager for ""
	I1018 12:44:25.892393  892123 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1018 12:44:25.892444  892123 start.go:349] cluster config:
	{Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:25.895566  892123 out.go:179] * Starting "ha-904693" primary control-plane node in "ha-904693" cluster
	I1018 12:44:25.898242  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:44:25.901058  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:44:25.903961  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:44:25.904124  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:25.904158  892123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 12:44:25.904169  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:44:25.904245  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:44:25.904261  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:44:25.904405  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:25.923338  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:44:25.923361  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:44:25.923378  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:44:25.923408  892123 start.go:360] acquireMachinesLock for ha-904693: {Name:mk0b11e6cfae1fdc8dfba1eeb3a517fb42d395b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:44:25.923474  892123 start.go:364] duration metric: took 44.365µs to acquireMachinesLock for "ha-904693"
	I1018 12:44:25.923496  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:44:25.923506  892123 fix.go:54] fixHost starting: 
	I1018 12:44:25.923797  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:25.940565  892123 fix.go:112] recreateIfNeeded on ha-904693: state=Stopped err=<nil>
	W1018 12:44:25.940596  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:44:25.943864  892123 out.go:252] * Restarting existing docker container for "ha-904693" ...
	I1018 12:44:25.943958  892123 cli_runner.go:164] Run: docker start ha-904693
	I1018 12:44:26.194711  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:26.215813  892123 kic.go:430] container "ha-904693" state is running.
	I1018 12:44:26.216371  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:26.239035  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:26.240781  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:44:26.240964  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:26.264332  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:26.264643  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:26.264652  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:44:26.265571  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:44:29.415325  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693
	
	I1018 12:44:29.415348  892123 ubuntu.go:182] provisioning hostname "ha-904693"
	I1018 12:44:29.415411  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:29.433529  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:29.433861  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:29.433879  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693 && echo "ha-904693" | sudo tee /etc/hostname
	I1018 12:44:29.588755  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693
	
	I1018 12:44:29.588848  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:29.609700  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:29.610004  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:29.610025  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:44:29.760098  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:44:29.760127  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:44:29.760148  892123 ubuntu.go:190] setting up certificates
	I1018 12:44:29.760157  892123 provision.go:84] configureAuth start
	I1018 12:44:29.760217  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:29.777989  892123 provision.go:143] copyHostCerts
	I1018 12:44:29.778029  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:29.778061  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:44:29.778077  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:29.778149  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:44:29.778226  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:29.778242  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:44:29.778247  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:29.778271  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:44:29.778308  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:29.778329  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:44:29.778333  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:29.778355  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:44:29.778399  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693 san=[127.0.0.1 192.168.49.2 ha-904693 localhost minikube]
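The SAN list logged above can be checked against the regenerated server.pem; a minimal sketch using plain openssl (the cert path is the one from the copyHostCerts lines above, everything else is generic):
	# Print the Subject Alternative Names baked into the regenerated server cert.
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem \
	    | grep -A1 "Subject Alternative Name"
	# expected to list: 127.0.0.1, 192.168.49.2, ha-904693, localhost, minikube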
	I1018 12:44:31.047109  892123 provision.go:177] copyRemoteCerts
	I1018 12:44:31.047193  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:44:31.047278  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.066067  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.172668  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:44:31.172743  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1018 12:44:31.191530  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:44:31.191692  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:44:31.211233  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:44:31.211300  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:44:31.230446  892123 provision.go:87] duration metric: took 1.47026349s to configureAuth
	I1018 12:44:31.230476  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:44:31.230724  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:31.230839  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.248755  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:31.249077  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:31.249098  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:44:31.576103  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:44:31.576129  892123 machine.go:96] duration metric: took 5.335328605s to provisionDockerMachine
	I1018 12:44:31.576140  892123 start.go:293] postStartSetup for "ha-904693" (driver="docker")
	I1018 12:44:31.576162  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:44:31.576224  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:44:31.576268  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.597908  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.707679  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:44:31.711002  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:44:31.711071  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:44:31.711090  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:44:31.711155  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:44:31.711247  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:44:31.711259  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:44:31.711355  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:44:31.718886  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:31.736340  892123 start.go:296] duration metric: took 160.184199ms for postStartSetup
	I1018 12:44:31.736438  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:44:31.736480  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.754046  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.853280  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:44:31.858215  892123 fix.go:56] duration metric: took 5.934701373s for fixHost
	I1018 12:44:31.858243  892123 start.go:83] releasing machines lock for "ha-904693", held for 5.934757012s
	I1018 12:44:31.858326  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:31.875758  892123 ssh_runner.go:195] Run: cat /version.json
	I1018 12:44:31.875830  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.875893  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:44:31.875954  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.896371  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.899369  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:32.089885  892123 ssh_runner.go:195] Run: systemctl --version
	I1018 12:44:32.096829  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:44:32.132460  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:44:32.136865  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:44:32.136993  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:44:32.144884  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:44:32.144907  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:44:32.144959  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:44:32.145021  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:44:32.160437  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:44:32.173683  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:44:32.173774  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:44:32.189773  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:44:32.203204  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:44:32.313641  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:44:32.432880  892123 docker.go:234] disabling docker service ...
	I1018 12:44:32.432958  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:44:32.449965  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:44:32.464069  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:44:32.584779  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:44:32.701524  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:44:32.716906  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:44:32.732220  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:44:32.732290  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.741629  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:44:32.741721  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.750956  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.760523  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.769646  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:44:32.777805  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.786814  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.795384  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.804860  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:44:32.812429  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:44:32.820169  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:32.933627  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
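Taken together, the sed edits above amount to a CRI-O drop-in along these lines; this is a sketch reconstructed from the commands rather than read back from the node, shown as a quick grep over /etc/crio/crio.conf.d/02-crio.conf:
	# Inspect the keys the sed pipeline above is expected to leave behind.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]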
	I1018 12:44:33.073156  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:44:33.073243  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:44:33.077339  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:44:33.077414  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:44:33.081817  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:44:33.111160  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:44:33.111248  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:44:33.140441  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:44:33.172376  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:44:33.175295  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:44:33.191834  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:44:33.195889  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
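The grep/echo/cp pair above is a generic, idempotent way to pin a hosts entry: strip any existing line for the name, re-append it, then copy the result back over /etc/hosts. A standalone sketch with placeholder values (NAME and IP are not taken from this run):
	# Re-pin a /etc/hosts entry without duplicating it on repeated runs.
	NAME=host.example.internal   # placeholder
	IP=192.0.2.10                # placeholder (TEST-NET address)
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts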
	I1018 12:44:33.206059  892123 kubeadm.go:883] updating cluster {Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:44:33.206251  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:33.206309  892123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:44:33.242225  892123 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:44:33.242255  892123 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:44:33.242314  892123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:44:33.268715  892123 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:44:33.268738  892123 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:44:33.268746  892123 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 12:44:33.268859  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
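Once the drop-in carrying this ExecStart override is written (the 359-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below), the effective unit can be inspected on the node; an illustrative pair of commands:
	# Show the merged kubelet unit, including the ExecStart override above.
	systemctl cat kubelet
	# Or just the resulting ExecStart property:
	systemctl show kubelet -p ExecStart --no-pager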
	I1018 12:44:33.268940  892123 ssh_runner.go:195] Run: crio config
	I1018 12:44:33.339264  892123 cni.go:84] Creating CNI manager for ""
	I1018 12:44:33.339288  892123 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1018 12:44:33.339305  892123 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:44:33.339328  892123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-904693 NodeName:ha-904693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:44:33.339459  892123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-904693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:44:33.339481  892123 kube-vip.go:115] generating kube-vip config ...
	I1018 12:44:33.339539  892123 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 12:44:33.352416  892123 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
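kube-vip's control-plane load-balancing needs the kernel IPVS modules, and the empty lsmod output above shows none are loaded, so minikube falls back to the ARP/VIP-only config that follows. On a host where IPVS is wanted, the modules would normally be loaded along these lines (illustrative, not something this test runs):
	# Load the standard IPVS modules and confirm they show up in lsmod.
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs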
	I1018 12:44:33.352526  892123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 12:44:33.352590  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:44:33.360442  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:44:33.360534  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 12:44:33.368315  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 12:44:33.381459  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:44:33.394655  892123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 12:44:33.407827  892123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 12:44:33.421345  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:44:33.425393  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:44:33.435521  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:33.547456  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:44:33.571606  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.2
	I1018 12:44:33.571630  892123 certs.go:195] generating shared ca certs ...
	I1018 12:44:33.571647  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:33.571882  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:44:33.572004  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:44:33.572021  892123 certs.go:257] generating profile certs ...
	I1018 12:44:33.572109  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key
	I1018 12:44:33.572141  892123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44
	I1018 12:44:33.572159  892123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1018 12:44:34.089841  892123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 ...
	I1018 12:44:34.089879  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44: {Name:mk73ee01371c8601ccdf153e68cf18fb41b0caf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:34.090092  892123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44 ...
	I1018 12:44:34.090109  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44: {Name:mkc407effae516c519c94bd817f4f88bdad85974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:34.090201  892123 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt
	I1018 12:44:34.090356  892123 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key
	I1018 12:44:34.090505  892123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key
	I1018 12:44:34.090525  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:44:34.090542  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:44:34.090563  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:44:34.090582  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:44:34.090598  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 12:44:34.090617  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 12:44:34.090634  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 12:44:34.090652  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 12:44:34.090706  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:44:34.090745  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:44:34.090766  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:44:34.090802  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:44:34.090831  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:44:34.090865  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:44:34.090911  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:34.090942  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.090959  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.090975  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.091691  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:44:34.111143  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:44:34.130224  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:44:34.147895  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:44:34.166568  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:44:34.191542  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:44:34.218375  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:44:34.243094  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:44:34.264702  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:44:34.290199  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:44:34.313998  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:44:34.341991  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:44:34.361379  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:44:34.380056  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:44:34.400140  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.409637  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.409718  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.514177  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:44:34.526963  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:44:34.541968  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.546450  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.546529  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.608344  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:44:34.616770  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:44:34.627781  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.635676  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.635755  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.691087  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
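The test -L / ln -fs pairs above implement OpenSSL's subject-hash lookup scheme: each CA is linked into /etc/ssl/certs under the hash printed by openssl x509 -hash, with a .0 suffix. A generic sketch (the cert path is a placeholder):
	# Install a CA cert under its subject-hash name, as done above for
	# 8360862.pem (3ec20f2e.0), minikubeCA.pem (b5213941.0) and 836086.pem (51391683.0).
	CERT=/usr/share/ca-certificates/example-ca.pem   # placeholder
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"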
	I1018 12:44:34.700436  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:44:34.704339  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:44:34.762289  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:44:34.835373  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:44:34.908492  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:44:34.968701  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:44:35.018893  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
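Each of the -checkend 86400 runs above asks openssl whether the certificate stays valid for at least another 86400 seconds (24 hours); a non-zero exit means it is about to expire and would need to be regenerated. A standalone sketch of the same check:
	# Exit status 0: still valid for 24h or more; non-zero: expiring or expired.
	CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt   # one of the certs checked above
	if openssl x509 -noout -in "$CERT" -checkend 86400; then
	    echo "ok: valid for at least another 24h"
	else
	    echo "renew: expires within 24h"
	fi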
	I1018 12:44:35.074866  892123 kubeadm.go:400] StartCluster: {Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:35.075012  892123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:44:35.075100  892123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:44:35.116413  892123 cri.go:89] found id: "f3e12646a28acaf33acb91c449640e2b7c2e1b51a07fda1222a124108fa3a60d"
	I1018 12:44:35.116441  892123 cri.go:89] found id: "adda974732675bf5434d1d2f50dcf1a62d7e89e192480dcbb5a9ffec2ab87ea9"
	I1018 12:44:35.116447  892123 cri.go:89] found id: "10798af55ae16ce657fb223cc3b8e580322135ff7246e162207a86ef8e91e5de"
	I1018 12:44:35.116470  892123 cri.go:89] found id: "2df8ceef3f1125567cb2b22627f6c2b90e7425331ffa5e5bbe8a97dcb849d5af"
	I1018 12:44:35.116474  892123 cri.go:89] found id: "bb134bdda02b2b1865dbf7bfd965c0d86f8c2b7ee0818669fb4f4cfd3f5f8484"
	I1018 12:44:35.116478  892123 cri.go:89] found id: ""
	I1018 12:44:35.116537  892123 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:44:35.135127  892123 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:44:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:44:35.135230  892123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:44:35.147730  892123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:44:35.147766  892123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:44:35.147824  892123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:44:35.157524  892123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:44:35.158025  892123 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-904693" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:35.158160  892123 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "ha-904693" cluster setting kubeconfig missing "ha-904693" context setting]
	I1018 12:44:35.158473  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.159101  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:44:35.159857  892123 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 12:44:35.159896  892123 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 12:44:35.159940  892123 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 12:44:35.159949  892123 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 12:44:35.159955  892123 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 12:44:35.159960  892123 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 12:44:35.160422  892123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:44:35.173010  892123 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 12:44:35.173040  892123 kubeadm.go:601] duration metric: took 25.265992ms to restartPrimaryControlPlane
	I1018 12:44:35.173050  892123 kubeadm.go:402] duration metric: took 98.194754ms to StartCluster
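
Note: restartPrimaryControlPlane decides whether kubeadm needs to re-run by diffing the freshly rendered config against the one already on disk; because they matched, the whole step took about 25ms. The decision reduces to (a sketch, paths from the log):

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
	  echo "running cluster does not require reconfiguration"
	else
	  echo "kubeadm config drifted; control plane would be reconfigured"
	fi
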
	I1018 12:44:35.173077  892123 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.173159  892123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:35.173840  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.174085  892123 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:44:35.174116  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:44:35.174143  892123 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:44:35.174720  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:35.180626  892123 out.go:179] * Enabled addons: 
	I1018 12:44:35.183765  892123 addons.go:514] duration metric: took 9.629337ms for enable addons: enabled=[]
	I1018 12:44:35.183834  892123 start.go:246] waiting for cluster config update ...
	I1018 12:44:35.183849  892123 start.go:255] writing updated cluster config ...
	I1018 12:44:35.186931  892123 out.go:203] 
	I1018 12:44:35.190015  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:35.190154  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.193614  892123 out.go:179] * Starting "ha-904693-m02" control-plane node in "ha-904693" cluster
	I1018 12:44:35.196414  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:44:35.199358  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:44:35.202336  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:35.202376  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:44:35.202494  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:44:35.202510  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:44:35.202646  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.202901  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:44:35.244427  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:44:35.244451  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:44:35.244465  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:44:35.244491  892123 start.go:360] acquireMachinesLock for ha-904693-m02: {Name:mk6c2f485a3713f332b20d1d9fdf103954df7ac5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:44:35.244553  892123 start.go:364] duration metric: took 42.085µs to acquireMachinesLock for "ha-904693-m02"
	I1018 12:44:35.244578  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:44:35.244587  892123 fix.go:54] fixHost starting: m02
	I1018 12:44:35.244844  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:35.277624  892123 fix.go:112] recreateIfNeeded on ha-904693-m02: state=Stopped err=<nil>
	W1018 12:44:35.277652  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:44:35.280995  892123 out.go:252] * Restarting existing docker container for "ha-904693-m02" ...
	I1018 12:44:35.281088  892123 cli_runner.go:164] Run: docker start ha-904693-m02
	I1018 12:44:35.680444  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:35.711547  892123 kic.go:430] container "ha-904693-m02" state is running.
	I1018 12:44:35.711981  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:35.739312  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.739556  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:44:35.739755  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:35.771422  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:35.771751  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:35.771766  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:44:35.772400  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:44:39.052293  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m02
	
	I1018 12:44:39.052316  892123 ubuntu.go:182] provisioning hostname "ha-904693-m02"
	I1018 12:44:39.052382  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:39.080876  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:39.081188  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:39.081199  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693-m02 && echo "ha-904693-m02" | sudo tee /etc/hostname
	I1018 12:44:39.340056  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m02
	
	I1018 12:44:39.340143  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:39.373338  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:39.373649  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:39.373672  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:44:39.630504  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:44:39.630578  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:44:39.630612  892123 ubuntu.go:190] setting up certificates
	I1018 12:44:39.630652  892123 provision.go:84] configureAuth start
	I1018 12:44:39.630734  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:39.675738  892123 provision.go:143] copyHostCerts
	I1018 12:44:39.675784  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:39.675817  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:44:39.675825  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:39.675904  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:44:39.675996  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:39.676014  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:44:39.676020  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:39.676047  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:44:39.676086  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:39.676101  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:44:39.676105  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:39.676126  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:44:39.676170  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693-m02 san=[127.0.0.1 192.168.49.3 ha-904693-m02 localhost minikube]
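
Note: the server cert generated here is an ordinary TLS serving certificate signed by the shared minikube CA, with the SANs listed in the log line above. An equivalent one-off with openssl (a sketch only; minikube generates this in Go, and the file names below are placeholders):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.ha-904693-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.3,DNS:ha-904693-m02,DNS:localhost,DNS:minikube')
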
	I1018 12:44:40.218129  892123 provision.go:177] copyRemoteCerts
	I1018 12:44:40.218244  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:44:40.218322  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:40.236440  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:40.357787  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:44:40.357851  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:44:40.393588  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:44:40.393654  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:44:40.414582  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:44:40.414689  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:44:40.435522  892123 provision.go:87] duration metric: took 804.840193ms to configureAuth
	I1018 12:44:40.435591  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:44:40.435862  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:40.436016  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:40.461848  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:40.462155  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:40.462170  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:44:41.604038  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:44:41.604122  892123 machine.go:96] duration metric: took 5.864556191s to provisionDockerMachine
	I1018 12:44:41.604150  892123 start.go:293] postStartSetup for "ha-904693-m02" (driver="docker")
	I1018 12:44:41.604193  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:44:41.604277  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:44:41.604362  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:41.635166  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:41.769733  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:44:41.773730  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:44:41.773761  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:44:41.773774  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:44:41.773829  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:44:41.773913  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:44:41.773925  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:44:41.774028  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:44:41.784876  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:41.825486  892123 start.go:296] duration metric: took 221.293722ms for postStartSetup
	I1018 12:44:41.825575  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:44:41.825622  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:41.853550  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:41.984344  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:44:41.992594  892123 fix.go:56] duration metric: took 6.7479992s for fixHost
	I1018 12:44:41.992625  892123 start.go:83] releasing machines lock for "ha-904693-m02", held for 6.748059204s
	I1018 12:44:41.992720  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:42.035079  892123 out.go:179] * Found network options:
	I1018 12:44:42.038018  892123 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 12:44:42.041005  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:44:42.041052  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 12:44:42.041143  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:44:42.041192  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:42.041445  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:44:42.041506  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:42.075479  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:42.085476  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:42.517801  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:44:42.530700  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:44:42.530775  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:44:42.589914  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:44:42.589943  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:44:42.589978  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:44:42.590036  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:44:42.638987  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:44:42.723590  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:44:42.723700  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:44:42.768190  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:44:42.816075  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:44:43.152357  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:44:43.513952  892123 docker.go:234] disabling docker service ...
	I1018 12:44:43.514041  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:44:43.540222  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:44:43.562890  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:44:43.881442  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:44:44.114079  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:44:44.148782  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:44:44.181271  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:44:44.181354  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.192614  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:44:44.192694  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.213293  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.227635  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.246173  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:44:44.260324  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.277559  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.289335  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.301185  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:44:44.310422  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:44:44.319878  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:44.623936  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:46:14.836486  892123 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.212505487s)
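
Note: the string of sed edits above rewrites CRI-O's minikube drop-in in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm the drop-in carries the intended values before the (here unusually slow, 1m30s) restart:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
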
	I1018 12:46:14.836513  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:46:14.836567  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:46:14.840408  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:46:14.840481  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:46:14.844075  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:46:14.874919  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:46:14.875007  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:14.904606  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:14.937907  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:46:14.940843  892123 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 12:46:14.943768  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:46:14.960925  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:46:14.964939  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
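
Note: host.minikube.internal is pinned to the Docker network gateway (192.168.49.1, per the network inspect just above) so that processes on the node can reach services running on the host machine. A quick sanity check from inside the node (the port below is only an example of a host-side service, not something this test runs):

	getent hosts host.minikube.internal        # should resolve to 192.168.49.1
	curl -s --max-time 2 http://host.minikube.internal:8000/ || true   # hypothetical host service
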
	I1018 12:46:14.975051  892123 mustload.go:65] Loading cluster: ha-904693
	I1018 12:46:14.975310  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:14.975576  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:46:14.993112  892123 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:46:14.993392  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.3
	I1018 12:46:14.993406  892123 certs.go:195] generating shared ca certs ...
	I1018 12:46:14.993423  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:46:14.993545  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:46:14.993591  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:46:14.993605  892123 certs.go:257] generating profile certs ...
	I1018 12:46:14.993681  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key
	I1018 12:46:14.993743  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.385e3bc8
	I1018 12:46:14.993827  892123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key
	I1018 12:46:14.993839  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:46:14.993853  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:46:14.993868  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:46:14.993881  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:46:14.993896  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 12:46:14.993915  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 12:46:14.993927  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 12:46:14.993940  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 12:46:14.993992  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:46:14.994023  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:46:14.994036  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:46:14.994064  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:46:14.994090  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:46:14.994114  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:46:14.994159  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:14.994187  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:14.994202  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:46:14.994213  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:46:14.994275  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:46:15.025861  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:46:15.144065  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 12:46:15.148291  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 12:46:15.157425  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 12:46:15.161586  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 12:46:15.170498  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 12:46:15.175977  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 12:46:15.189359  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 12:46:15.193340  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1018 12:46:15.202262  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 12:46:15.206095  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 12:46:15.214849  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 12:46:15.219115  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 12:46:15.228620  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:46:15.247537  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:46:15.267038  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:46:15.296556  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:46:15.317916  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:46:15.336289  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:46:15.353950  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:46:15.373731  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:46:15.394136  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:46:15.413750  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:46:15.434057  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:46:15.453144  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 12:46:15.471392  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 12:46:15.487802  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 12:46:15.504613  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1018 12:46:15.518898  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 12:46:15.533487  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 12:46:15.549167  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 12:46:15.564048  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:46:15.570605  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:46:15.580039  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.584075  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.584195  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.625980  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:46:15.634627  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:46:15.643508  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.647557  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.647647  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.691919  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 12:46:15.702734  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:46:15.718411  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.727743  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.727823  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.778694  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
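
Note: the hex names used for the /etc/ssl/certs symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hashes of the respective CA certificates, which is exactly what the preceding `openssl x509 -hash -noout` runs compute. The same link can be reproduced by hand, for example for the minikube CA:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 per the log
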
	I1018 12:46:15.788950  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:46:15.793324  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:46:15.837931  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:46:15.890538  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:46:15.937757  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:46:15.981996  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:46:16.024029  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 12:46:16.066839  892123 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 12:46:16.067008  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:46:16.067038  892123 kube-vip.go:115] generating kube-vip config ...
	I1018 12:46:16.067094  892123 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 12:46:16.080115  892123 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
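
Note: kube-vip's control-plane load-balancing requires the kernel IPVS modules, and the probe above is simply `lsmod | grep ip_vs`; since it fails on this kernel, only the ARP-based VIP is configured. Had the modules been available, they could be loaded like this (illustrative; not something the test attempts):

	sudo modprobe ip_vs && sudo modprobe ip_vs_rr && lsmod | grep ip_vs
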
	I1018 12:46:16.080187  892123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
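
Note: the manifest above only takes effect once it lands in the kubelet's static-pod directory, which happens a few lines below when it is copied to /etc/kubernetes/manifests/kube-vip.yaml. After the kubelet is (re)started, the VIP advertised via ARP should answer on the apiserver port; a rough check (the pod name assumes the usual static-pod <name>-<node> naming):

	sudo crictl pods --name kube-vip-ha-904693-m02
	curl -k https://192.168.49.254:8443/healthz   # /healthz is readable without credentials by default
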
	I1018 12:46:16.080261  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:46:16.089171  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:46:16.089252  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 12:46:16.097956  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:46:16.111585  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:46:16.125002  892123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 12:46:16.140735  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:46:16.144498  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:16.154452  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:16.294558  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:16.309039  892123 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:46:16.309487  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:16.314390  892123 out.go:179] * Verifying Kubernetes components...
	I1018 12:46:16.317527  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:16.453319  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:16.468140  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 12:46:16.468216  892123 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 12:46:16.468510  892123 node_ready.go:35] waiting up to 6m0s for node "ha-904693-m02" to be "Ready" ...
	I1018 12:46:18.198175  892123 node_ready.go:49] node "ha-904693-m02" is "Ready"
	I1018 12:46:18.198201  892123 node_ready.go:38] duration metric: took 1.729664998s for node "ha-904693-m02" to be "Ready" ...
	I1018 12:46:18.198217  892123 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:46:18.198278  892123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:46:18.217101  892123 api_server.go:72] duration metric: took 1.908011588s to wait for apiserver process to appear ...
	I1018 12:46:18.217124  892123 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:46:18.217144  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:18.251260  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:18.251333  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
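
Note: the 500s above (and the retries that follow) are the expected pattern right after an apiserver restart: every subcheck passes except poststarthook/rbac/bootstrap-roles, which only flips to ok once the default RBAC roles have been seeded, so minikube keeps polling until the endpoint returns 200. The same verbose breakdown can be fetched directly with the profile's client certificate (paths as they appear earlier in the log):

	curl --cacert /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt \
	     --cert   /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt \
	     --key    /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key \
	     "https://192.168.49.2:8443/healthz?verbose"
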
	I1018 12:46:18.717735  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:18.729578  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:18.729649  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:19.217875  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:19.234644  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:19.234731  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:19.717308  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:19.729198  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:19.729276  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:20.217475  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:20.226275  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:20.226367  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:20.718079  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:20.726851  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:20.727067  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:21.217664  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:21.226730  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:21.226816  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:21.717402  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:21.728568  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:21.728640  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:22.217240  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:22.225394  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:22.225426  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:22.717613  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:22.726996  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:22.727026  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:23.217597  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:23.225993  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:23.226022  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:23.717452  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:23.725986  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:23.726020  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:24.217619  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:24.225855  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:24.225886  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:24.717271  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:24.726978  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:24.727011  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:25.217464  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:25.225978  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:25.226004  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:25.717529  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:25.731613  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:25.731677  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:26.218064  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:26.226417  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:26.226450  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:26.718040  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:26.726172  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:26.726250  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:27.217881  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:27.226010  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:27.226046  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:27.717254  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:27.725448  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:27.725489  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:28.218129  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:28.226589  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:28.226622  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:28.717746  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:28.726371  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:28.726417  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:29.217874  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:29.227348  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:29.227383  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:29.717795  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:29.726023  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:29.726062  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:30.217207  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:30.225947  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:30.225992  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:30.717357  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:30.726514  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:30.726562  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:31.218170  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:31.226772  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:31.226808  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:31.717389  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:31.725579  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:31.725615  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:32.217261  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:32.225609  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:32.225686  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:32.717295  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:32.725527  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:32.725556  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:33.218209  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:33.226454  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:33.226485  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:33.718051  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:33.726332  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:33.726367  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:34.217582  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:34.230124  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:34.230163  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:34.717418  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:34.725438  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:34.725472  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:35.218121  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:35.228207  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:35.228243  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:35.717991  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:35.726425  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:35.726455  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:36.217618  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:36.226126  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:36.226154  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:36.717772  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:36.726079  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:36.726111  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:37.217227  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:37.228703  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:37.228733  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:37.717268  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:37.725340  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:37.725369  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:38.217518  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:38.225890  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:38.225933  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:38.718202  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:38.726360  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:38.726663  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:39.217201  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:39.225234  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:39.225266  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:39.717823  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:39.726660  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:39.726690  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:40.217283  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:40.226559  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:40.226603  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:40.717962  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:40.744008  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:40.744037  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:41.217607  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:41.225920  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:41.225964  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:41.717267  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:41.725273  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	W1018 12:46:41.725300  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz response identical to the listing above: all checks ok except "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld" ...]
	I1018 12:46:42.217469  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:42.226383  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:42.226419  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:42.718060  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:42.726681  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:42.726711  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:43.217278  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:43.225508  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:43.225544  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:43.718222  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:43.728152  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:43.728184  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:44.217541  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:44.225638  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:44.225666  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:44.717265  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:44.725307  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:44.725339  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:45.220300  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:45.238786  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:45.238819  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:45.717206  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:45.726748  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:45.726780  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:46.217362  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:46.225787  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:46.225815  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:46.718214  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:46.727280  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	W1018 12:46:46.727306  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[healthz body identical to the previous 500 response: all checks ok, only [-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
	I1018 12:46:47.217946  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:47.226669  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:46:47.227992  892123 api_server.go:141] control plane version: v1.34.1
	I1018 12:46:47.228017  892123 api_server.go:131] duration metric: took 29.010884789s to wait for apiserver health ...
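The 29-second wait logged above is minikube repeatedly probing the apiserver's /healthz endpoint until the rbac/bootstrap-roles post-start hook stops failing. The following is a minimal illustrative poll loop in Go: the endpoint URL and 500ms interval are taken from the log, but the code itself is an assumption for illustration, not minikube's actual api_server.go implementation (which uses the cluster certificates rather than skipping TLS verification).

// Illustrative sketch only: poll an apiserver /healthz URL until it returns 200
// or a timeout elapses. Constants mirror the log above; not minikube's real code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// For this sketch we skip certificate verification; the real check
	// authenticates against the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is ready
			}
			// A 500 with "[-]poststarthook/rbac/bootstrap-roles failed" is expected
			// while the bootstrap RBAC policy is still being installed; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the run above, the 500 responses clear once the bootstrap RBAC roles finish installing, which is the transition visible at 12:46:47.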
	I1018 12:46:47.228027  892123 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:46:47.235895  892123 system_pods.go:59] 26 kube-system pods found
	I1018 12:46:47.235980  892123 system_pods.go:61] "coredns-66bc5c9577-np459" [33cb0fc6-b8df-4149-85da-e6417a6de391] Running
	I1018 12:46:47.236002  892123 system_pods.go:61] "coredns-66bc5c9577-w4mzd" [76a15b28-7a49-47e3-baf1-12c18b680ade] Running
	I1018 12:46:47.236024  892123 system_pods.go:61] "etcd-ha-904693" [6a65bc4e-41f8-48fd-a64a-c1920f35caf4] Running
	I1018 12:46:47.236074  892123 system_pods.go:61] "etcd-ha-904693-m02" [94a516fe-dcfe-4e93-baa3-fb16142884cc] Running
	I1018 12:46:47.236094  892123 system_pods.go:61] "etcd-ha-904693-m03" [f2d9e3be-8b60-4549-a41d-d8bdab528ea7] Running
	I1018 12:46:47.236117  892123 system_pods.go:61] "kindnet-j75n6" [b30c1029-3217-42b0-87d1-f96b2bf02858] Running
	I1018 12:46:47.236155  892123 system_pods.go:61] "kindnet-lwbfx" [2053e657-7951-4224-aac4-980e101bc352] Running
	I1018 12:46:47.236181  892123 system_pods.go:61] "kindnet-nqql7" [061fc15c-de36-4123-8bb7-ac3d65a44ba4] Running
	I1018 12:46:47.236201  892123 system_pods.go:61] "kindnet-z2jqf" [adbd3882-090c-44e7-96c0-8374c4c8761e] Running
	I1018 12:46:47.236241  892123 system_pods.go:61] "kube-apiserver-ha-904693" [21472a04-9583-4452-949b-6d0d5c44ca4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:46:47.236265  892123 system_pods.go:61] "kube-apiserver-ha-904693-m02" [095e1af5-5aea-4dad-aa89-09611005c26b] Running
	I1018 12:46:47.236284  892123 system_pods.go:61] "kube-apiserver-ha-904693-m03" [ac2fa248-fb39-471a-953b-5caff0045c23] Running
	I1018 12:46:47.236324  892123 system_pods.go:61] "kube-controller-manager-ha-904693" [e46c064c-8863-43f6-8049-bc7f6b5fd6e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:46:47.236350  892123 system_pods.go:61] "kube-controller-manager-ha-904693-m02" [01ced66c-fe9a-49cb-96f9-f117382aaa39] Running
	I1018 12:46:47.236373  892123 system_pods.go:61] "kube-controller-manager-ha-904693-m03" [b2a37b0a-af53-4e5f-b048-630fa65a4562] Running
	I1018 12:46:47.236410  892123 system_pods.go:61] "kube-proxy-25w58" [8120ec45-9954-42fc-ba8c-1784f050d7c7] Running
	I1018 12:46:47.236436  892123 system_pods.go:61] "kube-proxy-bckwd" [3ef760c9-0925-40c4-a43d-3dc1bc11a4f3] Running
	I1018 12:46:47.236457  892123 system_pods.go:61] "kube-proxy-s8rqn" [1b0abab1-7503-4dbb-874d-3a89837e39b8] Running
	I1018 12:46:47.236497  892123 system_pods.go:61] "kube-proxy-xvnxv" [1babac5c-cb8e-4b88-8a73-387df9d8b652] Running
	I1018 12:46:47.236526  892123 system_pods.go:61] "kube-scheduler-ha-904693" [a40b4487-da19-47c0-a990-d459235cd8f0] Running
	I1018 12:46:47.236548  892123 system_pods.go:61] "kube-scheduler-ha-904693-m02" [32877fa9-7d21-4d37-9c42-855b6fd4c11f] Running
	I1018 12:46:47.236581  892123 system_pods.go:61] "kube-scheduler-ha-904693-m03" [fbe42864-50a4-4b9f-bee1-96f3e3db090d] Running
	I1018 12:46:47.236605  892123 system_pods.go:61] "kube-vip-ha-904693" [04fca9f1-a6fd-45a0-abb1-1b9226e1f8f4] Running
	I1018 12:46:47.236627  892123 system_pods.go:61] "kube-vip-ha-904693-m02" [2563b6ff-3a9b-487b-a469-d3a58046306b] Running
	I1018 12:46:47.236663  892123 system_pods.go:61] "kube-vip-ha-904693-m03" [67639c6c-f2c1-4066-999a-b1edb1875d5d] Running
	I1018 12:46:47.236688  892123 system_pods.go:61] "storage-provisioner" [d490933f-6cca-41d5-a5d3-d128def7ed62] Running
	I1018 12:46:47.236711  892123 system_pods.go:74] duration metric: took 8.677343ms to wait for pod list to return data ...
	I1018 12:46:47.236747  892123 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:46:47.240740  892123 default_sa.go:45] found service account: "default"
	I1018 12:46:47.240819  892123 default_sa.go:55] duration metric: took 4.047411ms for default service account to be created ...
	I1018 12:46:47.240844  892123 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:46:47.252062  892123 system_pods.go:86] 26 kube-system pods found
	I1018 12:46:47.252100  892123 system_pods.go:89] "coredns-66bc5c9577-np459" [33cb0fc6-b8df-4149-85da-e6417a6de391] Running
	I1018 12:46:47.252109  892123 system_pods.go:89] "coredns-66bc5c9577-w4mzd" [76a15b28-7a49-47e3-baf1-12c18b680ade] Running
	I1018 12:46:47.252113  892123 system_pods.go:89] "etcd-ha-904693" [6a65bc4e-41f8-48fd-a64a-c1920f35caf4] Running
	I1018 12:46:47.252143  892123 system_pods.go:89] "etcd-ha-904693-m02" [94a516fe-dcfe-4e93-baa3-fb16142884cc] Running
	I1018 12:46:47.252155  892123 system_pods.go:89] "etcd-ha-904693-m03" [f2d9e3be-8b60-4549-a41d-d8bdab528ea7] Running
	I1018 12:46:47.252160  892123 system_pods.go:89] "kindnet-j75n6" [b30c1029-3217-42b0-87d1-f96b2bf02858] Running
	I1018 12:46:47.252164  892123 system_pods.go:89] "kindnet-lwbfx" [2053e657-7951-4224-aac4-980e101bc352] Running
	I1018 12:46:47.252174  892123 system_pods.go:89] "kindnet-nqql7" [061fc15c-de36-4123-8bb7-ac3d65a44ba4] Running
	I1018 12:46:47.252178  892123 system_pods.go:89] "kindnet-z2jqf" [adbd3882-090c-44e7-96c0-8374c4c8761e] Running
	I1018 12:46:47.252186  892123 system_pods.go:89] "kube-apiserver-ha-904693" [21472a04-9583-4452-949b-6d0d5c44ca4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:46:47.252198  892123 system_pods.go:89] "kube-apiserver-ha-904693-m02" [095e1af5-5aea-4dad-aa89-09611005c26b] Running
	I1018 12:46:47.252219  892123 system_pods.go:89] "kube-apiserver-ha-904693-m03" [ac2fa248-fb39-471a-953b-5caff0045c23] Running
	I1018 12:46:47.252234  892123 system_pods.go:89] "kube-controller-manager-ha-904693" [e46c064c-8863-43f6-8049-bc7f6b5fd6e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:46:47.252239  892123 system_pods.go:89] "kube-controller-manager-ha-904693-m02" [01ced66c-fe9a-49cb-96f9-f117382aaa39] Running
	I1018 12:46:47.252247  892123 system_pods.go:89] "kube-controller-manager-ha-904693-m03" [b2a37b0a-af53-4e5f-b048-630fa65a4562] Running
	I1018 12:46:47.252252  892123 system_pods.go:89] "kube-proxy-25w58" [8120ec45-9954-42fc-ba8c-1784f050d7c7] Running
	I1018 12:46:47.252256  892123 system_pods.go:89] "kube-proxy-bckwd" [3ef760c9-0925-40c4-a43d-3dc1bc11a4f3] Running
	I1018 12:46:47.252260  892123 system_pods.go:89] "kube-proxy-s8rqn" [1b0abab1-7503-4dbb-874d-3a89837e39b8] Running
	I1018 12:46:47.252264  892123 system_pods.go:89] "kube-proxy-xvnxv" [1babac5c-cb8e-4b88-8a73-387df9d8b652] Running
	I1018 12:46:47.252277  892123 system_pods.go:89] "kube-scheduler-ha-904693" [a40b4487-da19-47c0-a990-d459235cd8f0] Running
	I1018 12:46:47.252294  892123 system_pods.go:89] "kube-scheduler-ha-904693-m02" [32877fa9-7d21-4d37-9c42-855b6fd4c11f] Running
	I1018 12:46:47.252308  892123 system_pods.go:89] "kube-scheduler-ha-904693-m03" [fbe42864-50a4-4b9f-bee1-96f3e3db090d] Running
	I1018 12:46:47.252312  892123 system_pods.go:89] "kube-vip-ha-904693" [04fca9f1-a6fd-45a0-abb1-1b9226e1f8f4] Running
	I1018 12:46:47.252318  892123 system_pods.go:89] "kube-vip-ha-904693-m02" [2563b6ff-3a9b-487b-a469-d3a58046306b] Running
	I1018 12:46:47.252336  892123 system_pods.go:89] "kube-vip-ha-904693-m03" [67639c6c-f2c1-4066-999a-b1edb1875d5d] Running
	I1018 12:46:47.252342  892123 system_pods.go:89] "storage-provisioner" [d490933f-6cca-41d5-a5d3-d128def7ed62] Running
	I1018 12:46:47.252357  892123 system_pods.go:126] duration metric: took 11.472811ms to wait for k8s-apps to be running ...
	I1018 12:46:47.252376  892123 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:46:47.252446  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:46:47.269517  892123 system_svc.go:56] duration metric: took 17.132227ms WaitForService to wait for kubelet
	I1018 12:46:47.269546  892123 kubeadm.go:586] duration metric: took 30.960462504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:46:47.269566  892123 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:46:47.274201  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274235  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274248  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274253  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274257  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274296  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274304  892123 node_conditions.go:105] duration metric: took 4.713888ms to run NodePressure ...
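The node capacity figures above (ephemeral storage 203034800Ki and 2 CPUs per node) come from the NodePressure verification step. Below is a hedged client-go sketch of reading the same capacity fields; the kubeconfig path is a placeholder assumption, and this is an illustration rather than minikube's node_conditions.go implementation.

// Illustrative sketch: list nodes and print the capacity fields that the
// NodePressure check reports. Kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Mirrors the "node cpu capacity" / "ephemeral capacity" log lines above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}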
	I1018 12:46:47.274322  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:46:47.274358  892123 start.go:255] writing updated cluster config ...
	I1018 12:46:47.277881  892123 out.go:203] 
	I1018 12:46:47.280982  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:47.281113  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.284552  892123 out.go:179] * Starting "ha-904693-m04" worker node in "ha-904693" cluster
	I1018 12:46:47.288329  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:46:47.290468  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:46:47.293413  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:46:47.293456  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:46:47.293503  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:46:47.293595  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:46:47.293607  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:46:47.293757  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.314739  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:46:47.314762  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:46:47.314780  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:46:47.314805  892123 start.go:360] acquireMachinesLock for ha-904693-m04: {Name:mk97ed96515b1272cbdea992e117b8911f5b1654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:46:47.314870  892123 start.go:364] duration metric: took 45.481µs to acquireMachinesLock for "ha-904693-m04"
	I1018 12:46:47.314893  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:46:47.314902  892123 fix.go:54] fixHost starting: m04
	I1018 12:46:47.315155  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:46:47.332443  892123 fix.go:112] recreateIfNeeded on ha-904693-m04: state=Stopped err=<nil>
	W1018 12:46:47.332521  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:46:47.335757  892123 out.go:252] * Restarting existing docker container for "ha-904693-m04" ...
	I1018 12:46:47.335864  892123 cli_runner.go:164] Run: docker start ha-904693-m04
	I1018 12:46:47.662072  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:46:47.692999  892123 kic.go:430] container "ha-904693-m04" state is running.
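The restart sequence logged here is a `docker container inspect --format={{.State.Status}}` followed by `docker start`. A small Go sketch of the same check-then-start flow via the docker CLI follows; only the container name is taken from the log, the rest is an illustrative assumption rather than minikube's kic driver code.

// Illustrative sketch: inspect a container's state and start it if stopped,
// shelling out to the docker CLI as the log above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "ha-904693-m04"
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	state := strings.TrimSpace(string(out))
	fmt.Println("container state:", state)
	if state != "running" {
		// Equivalent to the "Restarting existing docker container" step in the log.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			fmt.Println("start failed:", err)
		}
	}
}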
	I1018 12:46:47.693365  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:47.716277  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.716634  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:46:47.716712  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:47.737549  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:47.737866  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:47.737883  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:46:47.738856  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:46:50.891423  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m04
	
	I1018 12:46:50.891500  892123 ubuntu.go:182] provisioning hostname "ha-904693-m04"
	I1018 12:46:50.891579  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:50.911143  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:50.911556  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:50.911590  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693-m04 && echo "ha-904693-m04" | sudo tee /etc/hostname
	I1018 12:46:51.083384  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m04
	
	I1018 12:46:51.083546  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.103177  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:51.103480  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:51.103496  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:46:51.264024  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:46:51.264123  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:46:51.264148  892123 ubuntu.go:190] setting up certificates
	I1018 12:46:51.264172  892123 provision.go:84] configureAuth start
	I1018 12:46:51.264250  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:51.283401  892123 provision.go:143] copyHostCerts
	I1018 12:46:51.283446  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:46:51.283481  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:46:51.283494  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:46:51.283573  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:46:51.283688  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:46:51.283714  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:46:51.283724  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:46:51.283763  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:46:51.283815  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:46:51.283836  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:46:51.283845  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:46:51.283870  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:46:51.283923  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693-m04 san=[127.0.0.1 192.168.49.5 ha-904693-m04 localhost minikube]
	I1018 12:46:51.487797  892123 provision.go:177] copyRemoteCerts
	I1018 12:46:51.487868  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:46:51.487911  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.510008  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:51.615718  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:46:51.615785  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:46:51.634401  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:46:51.634467  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:46:51.655136  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:46:51.655199  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:46:51.677312  892123 provision.go:87] duration metric: took 413.118272ms to configureAuth
	I1018 12:46:51.677338  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:46:51.677569  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:51.677678  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.695105  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:51.695420  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:51.695442  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:46:52.007291  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:46:52.007315  892123 machine.go:96] duration metric: took 4.290661536s to provisionDockerMachine
	I1018 12:46:52.007328  892123 start.go:293] postStartSetup for "ha-904693-m04" (driver="docker")
	I1018 12:46:52.007341  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:46:52.007440  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:46:52.007488  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.034279  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.148189  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:46:52.151952  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:46:52.152034  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:46:52.152060  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:46:52.152123  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:46:52.152205  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:46:52.152217  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:46:52.152317  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:46:52.160224  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:52.185280  892123 start.go:296] duration metric: took 177.935801ms for postStartSetup
	I1018 12:46:52.185367  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:46:52.185409  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.204012  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.309958  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:46:52.318024  892123 fix.go:56] duration metric: took 5.003113681s for fixHost
	I1018 12:46:52.318051  892123 start.go:83] releasing machines lock for "ha-904693-m04", held for 5.003169468s
	I1018 12:46:52.318132  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:52.338543  892123 out.go:179] * Found network options:
	I1018 12:46:52.341584  892123 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1018 12:46:52.344371  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344399  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344423  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344438  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 12:46:52.344508  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:46:52.344554  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.344831  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:46:52.344903  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.372515  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.374225  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.579686  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:46:52.584329  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:46:52.584402  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:46:52.593417  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:46:52.593443  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:46:52.593476  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:46:52.593524  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:46:52.609004  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:46:52.623230  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:46:52.623318  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:46:52.639717  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:46:52.657699  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:46:52.794706  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:46:52.921750  892123 docker.go:234] disabling docker service ...
	I1018 12:46:52.921870  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:46:52.939978  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:46:52.957529  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:46:53.104620  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:46:53.235063  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:46:53.249044  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:46:53.264364  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:46:53.264444  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.277945  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:46:53.278028  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.288323  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.297677  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.306794  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:46:53.314879  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.325157  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.333994  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.343268  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:46:53.351341  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:46:53.359207  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:53.488389  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:46:53.631149  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:46:53.631269  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:46:53.635894  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:46:53.636001  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:46:53.640586  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:46:53.680864  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:46:53.680981  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:53.722237  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:53.757817  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:46:53.760732  892123 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 12:46:53.763576  892123 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 12:46:53.765748  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:46:53.783043  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:46:53.787170  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:53.797279  892123 mustload.go:65] Loading cluster: ha-904693
	I1018 12:46:53.797525  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:53.797787  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:46:53.816361  892123 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:46:53.816630  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.5
	I1018 12:46:53.816637  892123 certs.go:195] generating shared ca certs ...
	I1018 12:46:53.816653  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:46:53.816755  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:46:53.816795  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:46:53.816807  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:46:53.816820  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:46:53.816830  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:46:53.816843  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:46:53.816895  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:46:53.816925  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:46:53.816933  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:46:53.816956  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:46:53.816977  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:46:53.816997  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:46:53.817039  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:53.817065  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:53.817077  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.817087  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:46:53.817105  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:46:53.836940  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:46:53.857942  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:46:53.880441  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:46:53.899127  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:46:53.928293  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:46:53.948582  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:46:53.967019  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:46:53.973552  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:46:53.982588  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.986756  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.986822  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:46:54.033044  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 12:46:54.042429  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:46:54.051990  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.056823  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.056924  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.099082  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:46:54.107933  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:46:54.117094  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.121498  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.121603  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.164645  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:46:54.179721  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:46:54.183706  892123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:46:54.183754  892123 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1018 12:46:54.183838  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:46:54.183909  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:46:54.192639  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:46:54.192775  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1018 12:46:54.200819  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:46:54.215040  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:46:54.229836  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:46:54.234543  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:54.244928  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:54.376940  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:54.392818  892123 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1018 12:46:54.393235  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:54.396046  892123 out.go:179] * Verifying Kubernetes components...
	I1018 12:46:54.399111  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:54.530712  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:54.553448  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 12:46:54.553522  892123 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 12:46:54.553818  892123 node_ready.go:35] waiting up to 6m0s for node "ha-904693-m04" to be "Ready" ...
	I1018 12:46:54.557200  892123 node_ready.go:49] node "ha-904693-m04" is "Ready"
	I1018 12:46:54.557238  892123 node_ready.go:38] duration metric: took 3.399257ms for node "ha-904693-m04" to be "Ready" ...
	I1018 12:46:54.557252  892123 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:46:54.557309  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:46:54.571372  892123 system_svc.go:56] duration metric: took 14.111509ms WaitForService to wait for kubelet
	I1018 12:46:54.571412  892123 kubeadm.go:586] duration metric: took 178.543905ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:46:54.571434  892123 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:46:54.575184  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575215  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575227  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575232  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575236  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575242  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575247  892123 node_conditions.go:105] duration metric: took 3.806637ms to run NodePressure ...
	I1018 12:46:54.575260  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:46:54.575287  892123 start.go:255] writing updated cluster config ...
	I1018 12:46:54.575609  892123 ssh_runner.go:195] Run: rm -f paused
	I1018 12:46:54.579787  892123 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:46:54.580332  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:46:54.597506  892123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-np459" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.603509  892123 pod_ready.go:94] pod "coredns-66bc5c9577-np459" is "Ready"
	I1018 12:46:54.603539  892123 pod_ready.go:86] duration metric: took 6.000704ms for pod "coredns-66bc5c9577-np459" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.603550  892123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w4mzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.611441  892123 pod_ready.go:94] pod "coredns-66bc5c9577-w4mzd" is "Ready"
	I1018 12:46:54.611468  892123 pod_ready.go:86] duration metric: took 7.909713ms for pod "coredns-66bc5c9577-w4mzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.615301  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.622147  892123 pod_ready.go:94] pod "etcd-ha-904693" is "Ready"
	I1018 12:46:54.622188  892123 pod_ready.go:86] duration metric: took 6.858682ms for pod "etcd-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.622213  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.628869  892123 pod_ready.go:94] pod "etcd-ha-904693-m02" is "Ready"
	I1018 12:46:54.628906  892123 pod_ready.go:86] duration metric: took 6.68035ms for pod "etcd-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.628916  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.781287  892123 request.go:683] "Waited before sending request" delay="152.209169ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-904693-m03"
	I1018 12:46:54.981063  892123 request.go:683] "Waited before sending request" delay="194.309357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m03"
	I1018 12:46:54.984206  892123 pod_ready.go:99] pod "etcd-ha-904693-m03" in "kube-system" namespace is gone: node "ha-904693-m03" hosting pod "etcd-ha-904693-m03" is not found/running (skipping!): nodes "ha-904693-m03" not found
	I1018 12:46:54.984230  892123 pod_ready.go:86] duration metric: took 355.308487ms for pod "etcd-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:55.181697  892123 request.go:683] "Waited before sending request" delay="197.366801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1018 12:46:55.185514  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:55.380841  892123 request.go:683] "Waited before sending request" delay="195.16471ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693"
	I1018 12:46:55.581533  892123 request.go:683] "Waited before sending request" delay="196.391315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:55.781523  892123 request.go:683] "Waited before sending request" delay="95.293605ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693"
	I1018 12:46:55.981310  892123 request.go:683] "Waited before sending request" delay="196.367824ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.381644  892123 request.go:683] "Waited before sending request" delay="186.36368ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.781281  892123 request.go:683] "Waited before sending request" delay="92.241215ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.784454  892123 pod_ready.go:94] pod "kube-apiserver-ha-904693" is "Ready"
	I1018 12:46:56.784481  892123 pod_ready.go:86] duration metric: took 1.598894155s for pod "kube-apiserver-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:56.784491  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:56.980828  892123 request.go:683] "Waited before sending request" delay="196.248142ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693-m02"
	I1018 12:46:57.181477  892123 request.go:683] "Waited before sending request" delay="197.376581ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m02"
	I1018 12:46:57.184898  892123 pod_ready.go:94] pod "kube-apiserver-ha-904693-m02" is "Ready"
	I1018 12:46:57.184987  892123 pod_ready.go:86] duration metric: took 400.485818ms for pod "kube-apiserver-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.185012  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.381473  892123 request.go:683] "Waited before sending request" delay="196.32459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693-m03"
	I1018 12:46:57.581071  892123 request.go:683] "Waited before sending request" delay="196.144823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m03"
	I1018 12:46:57.583949  892123 pod_ready.go:99] pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace is gone: node "ha-904693-m03" hosting pod "kube-apiserver-ha-904693-m03" is not found/running (skipping!): nodes "ha-904693-m03" not found
	I1018 12:46:57.583972  892123 pod_ready.go:86] duration metric: took 398.952558ms for pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.781459  892123 request.go:683] "Waited before sending request" delay="197.326545ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1018 12:46:57.785500  892123 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.980788  892123 request.go:683] "Waited before sending request" delay="195.154281ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-904693"
	I1018 12:46:58.181517  892123 request.go:683] "Waited before sending request" delay="197.28876ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:58.381504  892123 request.go:683] "Waited before sending request" delay="95.288468ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-904693"
	I1018 12:46:58.580784  892123 request.go:683] "Waited before sending request" delay="194.281533ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:58.980851  892123 request.go:683] "Waited before sending request" delay="191.275019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:59.381533  892123 request.go:683] "Waited before sending request" delay="92.286237ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	W1018 12:46:59.792577  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:02.292675  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:04.293083  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:06.791662  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:08.795381  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:11.291608  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:13.291844  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:15.792067  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:18.291597  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:20.293497  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:22.793443  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:25.292520  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	I1018 12:47:26.791941  892123 pod_ready.go:94] pod "kube-controller-manager-ha-904693" is "Ready"
	I1018 12:47:26.791970  892123 pod_ready.go:86] duration metric: took 29.006442197s for pod "kube-controller-manager-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:47:26.791980  892123 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:47:28.799636  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:31.297899  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:33.298942  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:35.299122  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:37.799274  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:39.799373  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:42.301596  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:44.799207  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:47.299820  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:49.300296  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:51.798423  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:53.799278  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:56.298648  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:58.299303  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:00.306006  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:02.799215  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:04.802074  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:07.299319  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:09.799601  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:12.299633  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:14.799487  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:17.298286  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:19.298543  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:21.299532  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:23.799455  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:25.799781  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:28.299460  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:30.798185  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:32.799335  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:35.298104  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:37.299134  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:39.299272  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:41.299448  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:43.798462  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:45.799490  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:48.299004  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:50.299216  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:52.300129  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:54.301209  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:56.798691  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:59.299033  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:01.299417  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:03.798310  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:05.798466  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:08.298020  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:10.298851  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:12.299443  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:14.798426  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:17.299094  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:19.299178  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:21.798879  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:24.299310  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:26.798113  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:29.298413  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:31.799065  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:33.799271  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:35.803906  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:38.299064  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:40.299407  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:42.299972  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:44.798560  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:46.798758  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:48.799585  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:51.299544  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:53.300291  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:55.799555  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:58.298220  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:00.308856  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:02.799995  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:05.298036  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:07.300018  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:09.799328  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:12.298707  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:14.298758  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:16.798951  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:19.299158  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:21.799396  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:23.799509  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:26.298486  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:28.298553  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:30.298649  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:32.299193  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:34.800007  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:37.299243  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:39.799471  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:42.299390  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:44.798986  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:47.298083  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:49.300477  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:51.799774  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:54.298353  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	I1018 12:50:54.580674  892123 pod_ready.go:86] duration metric: took 3m27.788657319s for pod "kube-controller-manager-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:50:54.580708  892123 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1018 12:50:54.580723  892123 pod_ready.go:40] duration metric: took 4m0.000906152s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:50:54.583790  892123 out.go:203] 
	W1018 12:50:54.586624  892123 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1018 12:50:54.589451  892123 out.go:203] 
	
	
	==> CRI-O <==
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.15248919Z" level=info msg="Removing container: 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.162574204Z" level=info msg="Error loading conmon cgroup of container 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c: cgroup deleted" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.166108461Z" level=info msg="Removed container 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.757273139Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d6ef62be-0670-480d-80ef-805d2541c64a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.75822135Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=edbacbee-34c6-44e3-8f4d-c6941ddde03a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.759324246Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=dc29f712-7c3a-4dac-a06a-164b273dd7b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.759550702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.7650266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.765739428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.786332369Z" level=info msg="Created container 6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=dc29f712-7c3a-4dac-a06a-164b273dd7b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.787077969Z" level=info msg="Starting container: 6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8" id=fda5d9b2-9dfd-4967-9d1d-f43575d0dec0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.79106357Z" level=info msg="Started container" PID=1459 containerID=6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8 description=kube-system/kube-controller-manager-ha-904693/kube-controller-manager id=fda5d9b2-9dfd-4967-9d1d-f43575d0dec0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcaeeb6053eea250fdbfb9cf232775c5e74d7fbe49740ec76a8f8660f55d7bb
	Oct 18 12:46:22 ha-904693 conmon[1457]: conmon 6b9ca29a1030f2e300fa <ninfo>: container 1459 exited with status 1
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.247328418Z" level=info msg="Removing container: 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.255943755Z" level=info msg="Error loading conmon cgroup of container 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610: cgroup deleted" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.260457493Z" level=info msg="Removed container 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.757343358Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=522be43b-97c6-4135-8419-131b53678f0e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.760799411Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d33a4b7e-c8b6-4953-96d1-ec05fe811ee2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.763087148Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=ca79c353-2f92-46a9-b879-eb4c49528d96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.763391996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.776323243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.77706803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.797430732Z" level=info msg="Created container d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=ca79c353-2f92-46a9-b879-eb4c49528d96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.798666134Z" level=info msg="Starting container: d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a" id=234e12c8-0841-4b87-8ee3-3a75b5d265a4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.808104346Z" level=info msg="Started container" PID=1512 containerID=d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a description=kube-system/kube-controller-manager-ha-904693/kube-controller-manager id=234e12c8-0841-4b87-8ee3-3a75b5d265a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcaeeb6053eea250fdbfb9cf232775c5e74d7fbe49740ec76a8f8660f55d7bb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	d0b92a674c67c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Running             kube-controller-manager   7                   3dcaeeb6053ee       kube-controller-manager-ha-904693   kube-system
	6b9ca29a1030f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   4 minutes ago       Exited              kube-controller-manager   6                   3dcaeeb6053ee       kube-controller-manager-ha-904693   kube-system
	e1f431489a678       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       4                   6974f2ca4c496       storage-provisioner                 kube-system
	77f72db48997f       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Running             kube-vip                  3                   3f717be18b100       kube-vip-ha-904693                  kube-system
	3ed6de721b810       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   81c0a2ba3eb27       coredns-66bc5c9577-np459            kube-system
	56bb35c643a21       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   6 minutes ago       Running             busybox                   2                   1229fa54d0b21       busybox-7b57f96db7-v452k            default
	5956d42910b21       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 minutes ago       Running             coredns                   2                   a43d3d54495f1       coredns-66bc5c9577-w4mzd            kube-system
	b3ff0956e2bae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Exited              storage-provisioner       3                   6974f2ca4c496       storage-provisioner                 kube-system
	b7079b16a9b7a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 minutes ago       Running             kindnet-cni               2                   d48f01f8d4f05       kindnet-z2jqf                       kube-system
	664bc261a2046       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 minutes ago       Running             kube-proxy                2                   d2c7a02dbdc37       kube-proxy-xvnxv                    kube-system
	f3e12646a28ac       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            3                   2e67607845f25       kube-apiserver-ha-904693            kube-system
	10798af55ae16       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   6 minutes ago       Running             kube-scheduler            2                   76601f4f16313       kube-scheduler-ha-904693            kube-system
	2df8ceef3f112       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   6 minutes ago       Running             etcd                      2                   cd330999b4f8d       etcd-ha-904693                      kube-system
	bb134bdda02b2       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   6 minutes ago       Exited              kube-vip                  2                   3f717be18b100       kube-vip-ha-904693                  kube-system
	
	
	==> coredns [3ed6de721b81080e2d7009286cc18bd29f76863256af50d7e4af0f831a5e0461] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39165 - 29689 "HINFO IN 1724432357811573338.8138158095689922977. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017539888s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [5956d42910b21e70d3584ad16135f23f6c36232c73ad84e364d7d969d267b3ce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58851 - 10237 "HINFO IN 6142564933790260897.8896674369146005175. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017439783s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-904693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_37_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:36:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:50:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:49:39 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:49:39 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:49:39 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:49:39 +0000   Sat, 18 Oct 2025 12:37:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-904693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                281bd447-f1be-4669-83e5-596eea808f91
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v452k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-np459             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-w4mzd             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-904693                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-z2jqf                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-904693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-904693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-xvnxv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-904693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-904693                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m15s                  kube-proxy       
	  Normal   Starting                 8m13s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-904693 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   NodeHasSufficientPID     8m56s (x8 over 8m56s)  kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m56s (x8 over 8m56s)  kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m56s (x8 over 8m56s)  kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m14s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           8m10s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           7m39s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   NodeHasSufficientPID     6m27s (x8 over 6m27s)  kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 6m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m27s (x8 over 6m27s)  kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m27s (x8 over 6m27s)  kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           3m50s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	
	
	Name:               ha-904693-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T12_37_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:37:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:50:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:50:57 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:50:57 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:50:57 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:50:57 +0000   Sat, 18 Oct 2025 12:38:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-904693-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                731d6d01-e152-4180-b869-d1cbd652f7b0
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hrdj5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-904693-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-lwbfx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-904693-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-904693-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-s8rqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-904693-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-904693-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m11s                  kube-proxy       
	  Normal   Starting                 8m4s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Warning  CgroupV1                 8m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m52s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m52s (x8 over 8m52s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m52s (x8 over 8m52s)  kubelet          Node ha-904693-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m52s (x8 over 8m52s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m14s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           8m10s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           7m39s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   Starting                 6m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node ha-904693-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m23s (x8 over 6m23s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        5m23s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3m50s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	
	
	Name:               ha-904693-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T12_40_18_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:40:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:50:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:49:04 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:49:04 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:49:04 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:49:04 +0000   Sat, 18 Oct 2025 12:40:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-904693-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                5cf17c72-8409-4937-903b-03a3a82789c6
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2bmmd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kindnet-nqql7               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-25w58            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m43s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 7m22s                  kube-proxy       
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)      kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)      kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)      kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-904693-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m14s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           8m10s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   Starting                 7m45s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m45s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     7m41s (x8 over 7m44s)  kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m41s (x8 over 7m44s)  kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m41s (x8 over 7m44s)  kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m39s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   Starting                 4m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m8s (x8 over 4m12s)   kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m8s (x8 over 4m12s)   kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m8s (x8 over 4m12s)   kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m50s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000985] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=00000000204faf8b
	[  +0.001107] FS-Cache: N-key=[10] '34323937363632323639'
	[Oct18 12:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 12:16] overlayfs: idmapped layers are currently not supported
	[Oct18 12:22] overlayfs: idmapped layers are currently not supported
	[Oct18 12:23] overlayfs: idmapped layers are currently not supported
	[Oct18 12:35] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000048 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001047] FS-Cache: O-cookie d=00000000d8d7ca74{9P.session} n=000000006094aa8a
	[  +0.001123] FS-Cache: O-key=[10] '34323938373639393330'
	[  +0.000853] FS-Cache: N-cookie c=00000049 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=000000001487bd7a
	[  +0.001121] FS-Cache: N-key=[10] '34323938373639393330'
	[Oct18 12:36] overlayfs: idmapped layers are currently not supported
	[Oct18 12:37] overlayfs: idmapped layers are currently not supported
	[Oct18 12:38] overlayfs: idmapped layers are currently not supported
	[Oct18 12:40] overlayfs: idmapped layers are currently not supported
	[Oct18 12:41] overlayfs: idmapped layers are currently not supported
	[Oct18 12:42] overlayfs: idmapped layers are currently not supported
	[  +3.761821] overlayfs: idmapped layers are currently not supported
	[ +36.492252] overlayfs: idmapped layers are currently not supported
	[Oct18 12:43] overlayfs: idmapped layers are currently not supported
	[Oct18 12:44] overlayfs: idmapped layers are currently not supported
	[  +3.556272] overlayfs: idmapped layers are currently not supported
	[Oct18 12:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2df8ceef3f1125567cb2b22627f6c2b90e7425331ffa5e5bbe8a97dcb849d5af] <==
	{"level":"info","ts":"2025-10-18T12:46:18.326152Z","caller":"traceutil/trace.go:172","msg":"trace[280274257] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:2365; }","duration":"148.037419ms","start":"2025-10-18T12:46:18.178111Z","end":"2025-10-18T12:46:18.326148Z","steps":["trace[280274257] 'agreement among raft nodes before linearized reading'  (duration: 148.026195ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326175Z","caller":"traceutil/trace.go:172","msg":"trace[1603022509] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:2365; }","duration":"148.083557ms","start":"2025-10-18T12:46:18.178088Z","end":"2025-10-18T12:46:18.326172Z","steps":["trace[1603022509] 'agreement among raft nodes before linearized reading'  (duration: 148.07184ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326234Z","caller":"traceutil/trace.go:172","msg":"trace[130511930] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:4; response_revision:2365; }","duration":"148.157666ms","start":"2025-10-18T12:46:18.178071Z","end":"2025-10-18T12:46:18.326229Z","steps":["trace[130511930] 'agreement among raft nodes before linearized reading'  (duration: 148.112168ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326259Z","caller":"traceutil/trace.go:172","msg":"trace[1445912654] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:2365; }","duration":"148.200399ms","start":"2025-10-18T12:46:18.178054Z","end":"2025-10-18T12:46:18.326254Z","steps":["trace[1445912654] 'agreement among raft nodes before linearized reading'  (duration: 148.188928ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326330Z","caller":"traceutil/trace.go:172","msg":"trace[1229954123] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:11; response_revision:2365; }","duration":"148.289377ms","start":"2025-10-18T12:46:18.178036Z","end":"2025-10-18T12:46:18.326326Z","steps":["trace[1229954123] 'agreement among raft nodes before linearized reading'  (duration: 148.231735ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T12:46:18.326351Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.328934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T12:46:18.344747Z","caller":"traceutil/trace.go:172","msg":"trace[734962470] range","detail":"{range_begin:/registry/priorityclasses; range_end:; response_count:0; response_revision:2365; }","duration":"166.713843ms","start":"2025-10-18T12:46:18.178019Z","end":"2025-10-18T12:46:18.344733Z","steps":["trace[734962470] 'agreement among raft nodes before linearized reading'  (duration: 148.321418ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326407Z","caller":"traceutil/trace.go:172","msg":"trace[513164834] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:2365; }","duration":"148.400984ms","start":"2025-10-18T12:46:18.178002Z","end":"2025-10-18T12:46:18.326403Z","steps":["trace[513164834] 'agreement among raft nodes before linearized reading'  (duration: 148.360253ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326438Z","caller":"traceutil/trace.go:172","msg":"trace[1825915532] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2365; }","duration":"148.452734ms","start":"2025-10-18T12:46:18.177982Z","end":"2025-10-18T12:46:18.326435Z","steps":["trace[1825915532] 'agreement among raft nodes before linearized reading'  (duration: 148.439975ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326463Z","caller":"traceutil/trace.go:172","msg":"trace[2054924881] range","detail":"{range_begin:/registry/deployments; range_end:; response_count:0; response_revision:2365; }","duration":"149.526329ms","start":"2025-10-18T12:46:18.176933Z","end":"2025-10-18T12:46:18.326459Z","steps":["trace[2054924881] 'agreement among raft nodes before linearized reading'  (duration: 149.513963ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326514Z","caller":"traceutil/trace.go:172","msg":"trace[1418956280] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:3; response_revision:2365; }","duration":"149.59627ms","start":"2025-10-18T12:46:18.176913Z","end":"2025-10-18T12:46:18.326510Z","steps":["trace[1418956280] 'agreement among raft nodes before linearized reading'  (duration: 149.557213ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326539Z","caller":"traceutil/trace.go:172","msg":"trace[302604753] range","detail":"{range_begin:/registry/podtemplates; range_end:; response_count:0; response_revision:2365; }","duration":"149.648521ms","start":"2025-10-18T12:46:18.176885Z","end":"2025-10-18T12:46:18.326534Z","steps":["trace[302604753] 'agreement among raft nodes before linearized reading'  (duration: 149.637723ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326575Z","caller":"traceutil/trace.go:172","msg":"trace[1331174270] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:2365; }","duration":"149.692148ms","start":"2025-10-18T12:46:18.176868Z","end":"2025-10-18T12:46:18.326560Z","steps":["trace[1331174270] 'agreement among raft nodes before linearized reading'  (duration: 149.678757ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326642Z","caller":"traceutil/trace.go:172","msg":"trace[818132752] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:12; response_revision:2365; }","duration":"149.794918ms","start":"2025-10-18T12:46:18.176844Z","end":"2025-10-18T12:46:18.326639Z","steps":["trace[818132752] 'agreement among raft nodes before linearized reading'  (duration: 149.740657ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326670Z","caller":"traceutil/trace.go:172","msg":"trace[1713580724] range","detail":"{range_begin:/registry/volumeattributesclasses/; range_end:/registry/volumeattributesclasses0; response_count:0; response_revision:2365; }","duration":"149.889492ms","start":"2025-10-18T12:46:18.176775Z","end":"2025-10-18T12:46:18.326664Z","steps":["trace[1713580724] 'agreement among raft nodes before linearized reading'  (duration: 149.876634ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326715Z","caller":"traceutil/trace.go:172","msg":"trace[211047245] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:2; response_revision:2365; }","duration":"151.330336ms","start":"2025-10-18T12:46:18.175381Z","end":"2025-10-18T12:46:18.326712Z","steps":["trace[211047245] 'agreement among raft nodes before linearized reading'  (duration: 151.29754ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326875Z","caller":"traceutil/trace.go:172","msg":"trace[1405564136] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:29; response_revision:2365; }","duration":"151.511433ms","start":"2025-10-18T12:46:18.175359Z","end":"2025-10-18T12:46:18.326870Z","steps":["trace[1405564136] 'agreement among raft nodes before linearized reading'  (duration: 151.365405ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326922Z","caller":"traceutil/trace.go:172","msg":"trace[870780723] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:2365; }","duration":"151.583697ms","start":"2025-10-18T12:46:18.175334Z","end":"2025-10-18T12:46:18.326918Z","steps":["trace[870780723] 'agreement among raft nodes before linearized reading'  (duration: 151.551688ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326948Z","caller":"traceutil/trace.go:172","msg":"trace[951226190] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:2365; }","duration":"151.633584ms","start":"2025-10-18T12:46:18.175309Z","end":"2025-10-18T12:46:18.326943Z","steps":["trace[951226190] 'agreement among raft nodes before linearized reading'  (duration: 151.621252ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.326972Z","caller":"traceutil/trace.go:172","msg":"trace[400054300] range","detail":"{range_begin:/registry/resourceclaimtemplates/; range_end:/registry/resourceclaimtemplates0; response_count:0; response_revision:2365; }","duration":"151.740924ms","start":"2025-10-18T12:46:18.175227Z","end":"2025-10-18T12:46:18.326968Z","steps":["trace[400054300] 'agreement among raft nodes before linearized reading'  (duration: 151.728715ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.327065Z","caller":"traceutil/trace.go:172","msg":"trace[1661544509] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:21; response_revision:2365; }","duration":"156.709429ms","start":"2025-10-18T12:46:18.170352Z","end":"2025-10-18T12:46:18.327061Z","steps":["trace[1661544509] 'agreement among raft nodes before linearized reading'  (duration: 156.627877ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.327290Z","caller":"traceutil/trace.go:172","msg":"trace[1650378666] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:71; response_revision:2365; }","duration":"156.952393ms","start":"2025-10-18T12:46:18.170333Z","end":"2025-10-18T12:46:18.327285Z","steps":["trace[1650378666] 'agreement among raft nodes before linearized reading'  (duration: 156.741454ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.327323Z","caller":"traceutil/trace.go:172","msg":"trace[428909626] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:2365; }","duration":"157.006294ms","start":"2025-10-18T12:46:18.170312Z","end":"2025-10-18T12:46:18.327318Z","steps":["trace[428909626] 'agreement among raft nodes before linearized reading'  (duration: 156.991525ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.369419Z","caller":"traceutil/trace.go:172","msg":"trace[317085595] transaction","detail":"{read_only:false; response_revision:2366; number_of_response:1; }","duration":"119.27978ms","start":"2025-10-18T12:46:18.250127Z","end":"2025-10-18T12:46:18.369407Z","steps":["trace[317085595] 'process raft request'  (duration: 118.872342ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T12:46:18.392088Z","caller":"traceutil/trace.go:172","msg":"trace[1241831869] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2370; }","duration":"111.572204ms","start":"2025-10-18T12:46:18.280506Z","end":"2025-10-18T12:46:18.392078Z","steps":["trace[1241831869] 'agreement among raft nodes before linearized reading'  (duration: 111.516351ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:51:01 up  4:33,  0 user,  load average: 0.66, 1.22, 1.64
	Linux ha-904693 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b7079b16a9b7a2a39fa399b6c2af14323e7571db253c3823a3927f85257d9854] <==
	I1018 12:50:15.001953       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:50:24.996513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:50:24.996644       1 main.go:301] handling current node
	I1018 12:50:24.996670       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:50:24.996678       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:50:24.996841       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:50:24.996854       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:50:34.997838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:50:34.997874       1 main.go:301] handling current node
	I1018 12:50:34.997890       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:50:34.997896       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:50:34.998069       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:50:34.998081       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:50:44.996386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:50:44.996534       1 main.go:301] handling current node
	I1018 12:50:44.996575       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:50:44.996620       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:50:44.996810       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:50:44.996860       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:50:55.001909       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:50:55.002019       1 main.go:301] handling current node
	I1018 12:50:55.002076       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:50:55.002117       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:50:55.002341       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:50:55.002385       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f3e12646a28acaf33acb91c449640e2b7c2e1b51a07fda1222a124108fa3a60d] <==
	{"level":"warn","ts":"2025-10-18T12:46:18.148825Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026672c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148839Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015fd0e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148853Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400202ed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148867Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40011f43c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148880Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002666960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148896Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd2780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148670Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002ce03c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.151438Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002174960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.151912Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015fc5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155109Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd30e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155205Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155239Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a325a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155306Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002666960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155314Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400141f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160120Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027dc780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160123Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd30e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160241Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bb0f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160569Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000ed9860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1018 12:46:33.558140       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:46:36.235887       1 controller.go:667] quota admission added evaluator for: endpoints
	W1018 12:46:47.238564       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1018 12:46:47.262716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:47:10.772461       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:47:11.078494       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:47:11.124245       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8] <==
	I1018 12:46:09.683548       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:46:10.407864       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 12:46:10.407894       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:46:10.409427       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 12:46:10.409610       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 12:46:10.409861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 12:46:10.409969       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:46:22.428900       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a] <==
	I1018 12:47:10.692829       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	E1018 12:47:30.656861       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:30.656970       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:30.656983       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:30.656990       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:30.656996       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657450       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657482       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657489       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657495       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657505       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	I1018 12:47:50.671214       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-904693-m03"
	I1018 12:47:50.721328       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-904693-m03"
	I1018 12:47:50.721365       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-j75n6"
	I1018 12:47:50.760722       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-j75n6"
	I1018 12:47:50.760993       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-904693-m03"
	I1018 12:47:50.808228       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-904693-m03"
	I1018 12:47:50.808276       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bckwd"
	I1018 12:47:50.847148       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bckwd"
	I1018 12:47:50.847260       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-904693-m03"
	I1018 12:47:50.881140       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-904693-m03"
	I1018 12:47:50.881190       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-904693-m03"
	I1018 12:47:50.922459       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-904693-m03"
	I1018 12:47:50.922494       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-904693-m03"
	I1018 12:47:50.962354       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-904693-m03"
	
	
	==> kube-proxy [664bc261a20461615c227d76978fcabbc9c19e3de0de14724a6fb0f9bbcb8676] <==
	E1018 12:45:30.503531       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": http2: client connection lost"
	I1018 12:45:30.503572       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1018 12:45:34.448156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:34.448255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:34.448188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:34.448343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:37.516021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:37.516021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:37.516161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:37.516292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:43.916094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:43.916094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:43.916214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:43.916222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:43.916267       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:45:54.700040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:54.700039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:54.700156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:54.700208       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:45:54.700282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:46:09.964095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:46:09.964311       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:46:13.036094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:46:13.036094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:46:16.108095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kube-scheduler [10798af55ae16ce657fb223cc3b8e580322135ff7246e162207a86ef8e91e5de] <==
	I1018 12:44:43.483110       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:44:43.485678       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:44:43.485928       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:44:43.485983       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:44:43.486026       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:44:43.495638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:44:43.497112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:44:43.500234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:44:43.500370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:44:43.500447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:44:43.500535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:44:43.500610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:44:43.500685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:44:43.500757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:44:43.500889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:44:43.500972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:44:43.501347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:44:43.502109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:44:43.501648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:44:43.501688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:44:43.501702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:44:43.502170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:44:43.501546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:44:43.502196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 12:44:44.986536       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:45:47 ha-904693 kubelet[799]: I1018 12:45:47.150859     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:45:47 ha-904693 kubelet[799]: E1018 12:45:47.151001     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:45:51 ha-904693 kubelet[799]: E1018 12:45:51.687581     799 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-ha-904693)" podUID="d3c5d3145f312260295e29de6ab47ebb" pod="kube-system/kube-controller-manager-ha-904693"
	Oct 18 12:45:52 ha-904693 kubelet[799]: E1018 12:45:52.693374     799 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-904693.186f96842d53c593  default   2360 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-904693,UID:ha-904693,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-904693 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-904693,},FirstTimestamp:2025-10-18 12:44:33 +0000 UTC,LastTimestamp:2025-10-18 12:44:33.857399662 +0000 UTC m=+0.292657332,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-904693,}"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.122804     799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-904693?timeout=10s\": context deadline exceeded" interval="200ms"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.698869     799 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-904693\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-904693?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.699164     799 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Oct 18 12:45:55 ha-904693 kubelet[799]: I1018 12:45:55.781460     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:45:55 ha-904693 kubelet[799]: E1018 12:45:55.781682     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:04 ha-904693 kubelet[799]: E1018 12:46:04.324141     799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-904693?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	Oct 18 12:46:08 ha-904693 kubelet[799]: I1018 12:46:08.756735     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:46:14 ha-904693 kubelet[799]: E1018 12:46:14.725537     799 request.go:1196] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)"
	Oct 18 12:46:14 ha-904693 kubelet[799]: E1018 12:46:14.725613     799 controller.go:145] "Failed to ensure lease exists, will retry" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" interval="800ms"
	Oct 18 12:46:23 ha-904693 kubelet[799]: I1018 12:46:23.245175     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:46:23 ha-904693 kubelet[799]: I1018 12:46:23.245500     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:23 ha-904693 kubelet[799]: E1018 12:46:23.245643     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:25 ha-904693 kubelet[799]: I1018 12:46:25.781162     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:25 ha-904693 kubelet[799]: E1018 12:46:25.781843     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:26 ha-904693 kubelet[799]: I1018 12:46:26.573839     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:26 ha-904693 kubelet[799]: E1018 12:46:26.574012     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:39 ha-904693 kubelet[799]: I1018 12:46:39.758543     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:39 ha-904693 kubelet[799]: E1018 12:46:39.758726     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:53 ha-904693 kubelet[799]: I1018 12:46:53.756932     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:53 ha-904693 kubelet[799]: E1018 12:46:53.757548     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:47:07 ha-904693 kubelet[799]: I1018 12:47:07.756671     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-904693 -n ha-904693
helpers_test.go:269: (dbg) Run:  kubectl --context ha-904693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.075515407s)
ha_test.go:309: expected profile "ha-904693" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-904693\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-904693\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-904693\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-904693
helpers_test.go:243: (dbg) docker inspect ha-904693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e",
	        "Created": "2025-10-18T12:36:31.14853988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 892248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:44:25.97701543Z",
	            "FinishedAt": "2025-10-18T12:44:25.288916989Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/hosts",
	        "LogPath": "/var/lib/docker/containers/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e-json.log",
	        "Name": "/ha-904693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-904693:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-904693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e",
	                "LowerDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/731b7d94934d2edde93c52bdd71150265bb9357db6439a3e40cc6788221b811f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-904693",
	                "Source": "/var/lib/docker/volumes/ha-904693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-904693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-904693",
	                "name.minikube.sigs.k8s.io": "ha-904693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c50d05dd02fc73a6e1bf9086ad2446bd076fd521984307bb39ab5a499f23326",
	            "SandboxKey": "/var/run/docker/netns/9c50d05dd02f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33940"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-904693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:d6:c0:3d:80:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee94edf185e561d017352654d9e728ff82b5f4b27507dd51d551497bab189810",
	                    "EndpointID": "255fc8c5c14856f51b7da7876d61e503ec6a3f85dd6b9147108386eebadf9c15",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-904693",
	                        "9e9432db50a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-904693 -n ha-904693
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 logs -n 25: (2.030520213s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-904693 ssh -n ha-904693-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test_ha-904693-m03_ha-904693-m04.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp testdata/cp-test.txt ha-904693-m04:/home/docker/cp-test.txt                                                             │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2476059903/001/cp-test_ha-904693-m04.txt │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693:/home/docker/cp-test_ha-904693-m04_ha-904693.txt                       │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693.txt                                                 │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693-m02:/home/docker/cp-test_ha-904693-m04_ha-904693-m02.txt               │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m02 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693-m02.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ cp      │ ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693-m03:/home/docker/cp-test_ha-904693-m04_ha-904693-m03.txt               │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ ssh     │ ha-904693 ssh -n ha-904693-m03 sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693-m03.txt                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:40 UTC │
	│ node    │ ha-904693 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:40 UTC │ 18 Oct 25 12:41 UTC │
	│ node    │ ha-904693 node start m02 --alsologtostderr -v 5                                                                                      │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:41 UTC │
	│ node    │ ha-904693 node list --alsologtostderr -v 5                                                                                           │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │                     │
	│ stop    │ ha-904693 stop --alsologtostderr -v 5                                                                                                │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:41 UTC │
	│ start   │ ha-904693 start --wait true --alsologtostderr -v 5                                                                                   │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:41 UTC │ 18 Oct 25 12:43 UTC │
	│ node    │ ha-904693 node list --alsologtostderr -v 5                                                                                           │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │                     │
	│ node    │ ha-904693 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │ 18 Oct 25 12:43 UTC │
	│ stop    │ ha-904693 stop --alsologtostderr -v 5                                                                                                │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:43 UTC │ 18 Oct 25 12:44 UTC │
	│ start   │ ha-904693 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:44 UTC │                     │
	│ node    │ ha-904693 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-904693 │ jenkins │ v1.37.0 │ 18 Oct 25 12:51 UTC │ 18 Oct 25 12:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:44:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:44:25.711916  892123 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:44:25.712088  892123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.712119  892123 out.go:374] Setting ErrFile to fd 2...
	I1018 12:44:25.712138  892123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.712423  892123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:44:25.712837  892123 out.go:368] Setting JSON to false
	I1018 12:44:25.713721  892123 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":16018,"bootTime":1760775448,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:44:25.713821  892123 start.go:141] virtualization:  
	I1018 12:44:25.719185  892123 out.go:179] * [ha-904693] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:44:25.722230  892123 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:44:25.722359  892123 notify.go:220] Checking for updates...
	I1018 12:44:25.728356  892123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:44:25.731393  892123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:25.734246  892123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:44:25.737415  892123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:44:25.740192  892123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:44:25.743783  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:25.744347  892123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:44:25.769253  892123 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:44:25.769378  892123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:44:25.830176  892123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:44:25.820847832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:44:25.830279  892123 docker.go:318] overlay module found
	I1018 12:44:25.833295  892123 out.go:179] * Using the docker driver based on existing profile
	I1018 12:44:25.836144  892123 start.go:305] selected driver: docker
	I1018 12:44:25.836180  892123 start.go:925] validating driver "docker" against &{Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:25.836325  892123 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:44:25.836440  892123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:44:25.891844  892123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:44:25.88247637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:44:25.892307  892123 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:44:25.892333  892123 cni.go:84] Creating CNI manager for ""
	I1018 12:44:25.892393  892123 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1018 12:44:25.892444  892123 start.go:349] cluster config:
	{Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:25.895566  892123 out.go:179] * Starting "ha-904693" primary control-plane node in "ha-904693" cluster
	I1018 12:44:25.898242  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:44:25.901058  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:44:25.903961  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:44:25.904124  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:25.904158  892123 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 12:44:25.904169  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:44:25.904245  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:44:25.904261  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:44:25.904405  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:25.923338  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:44:25.923361  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:44:25.923378  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:44:25.923408  892123 start.go:360] acquireMachinesLock for ha-904693: {Name:mk0b11e6cfae1fdc8dfba1eeb3a517fb42d395b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:44:25.923474  892123 start.go:364] duration metric: took 44.365µs to acquireMachinesLock for "ha-904693"
	I1018 12:44:25.923496  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:44:25.923506  892123 fix.go:54] fixHost starting: 
	I1018 12:44:25.923797  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:25.940565  892123 fix.go:112] recreateIfNeeded on ha-904693: state=Stopped err=<nil>
	W1018 12:44:25.940596  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:44:25.943864  892123 out.go:252] * Restarting existing docker container for "ha-904693" ...
	I1018 12:44:25.943958  892123 cli_runner.go:164] Run: docker start ha-904693
	I1018 12:44:26.194711  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:26.215813  892123 kic.go:430] container "ha-904693" state is running.
	I1018 12:44:26.216371  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:26.239035  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:26.240781  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:44:26.240964  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:26.264332  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:26.264643  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:26.264652  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:44:26.265571  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:44:29.415325  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693
	
	I1018 12:44:29.415348  892123 ubuntu.go:182] provisioning hostname "ha-904693"
	I1018 12:44:29.415411  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:29.433529  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:29.433861  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:29.433879  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693 && echo "ha-904693" | sudo tee /etc/hostname
	I1018 12:44:29.588755  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693
	
	I1018 12:44:29.588848  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:29.609700  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:29.610004  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:29.610025  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:44:29.760098  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:44:29.760127  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:44:29.760148  892123 ubuntu.go:190] setting up certificates
	I1018 12:44:29.760157  892123 provision.go:84] configureAuth start
	I1018 12:44:29.760217  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:29.777989  892123 provision.go:143] copyHostCerts
	I1018 12:44:29.778029  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:29.778061  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:44:29.778077  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:29.778149  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:44:29.778226  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:29.778242  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:44:29.778247  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:29.778271  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:44:29.778308  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:29.778329  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:44:29.778333  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:29.778355  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:44:29.778399  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693 san=[127.0.0.1 192.168.49.2 ha-904693 localhost minikube]
	I1018 12:44:31.047109  892123 provision.go:177] copyRemoteCerts
	I1018 12:44:31.047193  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:44:31.047278  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.066067  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.172668  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:44:31.172743  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1018 12:44:31.191530  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:44:31.191692  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:44:31.211233  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:44:31.211300  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:44:31.230446  892123 provision.go:87] duration metric: took 1.47026349s to configureAuth
	I1018 12:44:31.230476  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:44:31.230724  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:31.230839  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.248755  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:31.249077  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33937 <nil> <nil>}
	I1018 12:44:31.249098  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:44:31.576103  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:44:31.576129  892123 machine.go:96] duration metric: took 5.335328605s to provisionDockerMachine
	I1018 12:44:31.576140  892123 start.go:293] postStartSetup for "ha-904693" (driver="docker")
	I1018 12:44:31.576162  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:44:31.576224  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:44:31.576268  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.597908  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.707679  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:44:31.711002  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:44:31.711071  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:44:31.711090  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:44:31.711155  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:44:31.711247  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:44:31.711259  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:44:31.711355  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:44:31.718886  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:31.736340  892123 start.go:296] duration metric: took 160.184199ms for postStartSetup
	I1018 12:44:31.736438  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:44:31.736480  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.754046  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.853280  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:44:31.858215  892123 fix.go:56] duration metric: took 5.934701373s for fixHost
	I1018 12:44:31.858243  892123 start.go:83] releasing machines lock for "ha-904693", held for 5.934757012s
	I1018 12:44:31.858326  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:44:31.875758  892123 ssh_runner.go:195] Run: cat /version.json
	I1018 12:44:31.875830  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.875893  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:44:31.875954  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:44:31.896371  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:31.899369  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:44:32.089885  892123 ssh_runner.go:195] Run: systemctl --version
	I1018 12:44:32.096829  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:44:32.132460  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:44:32.136865  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:44:32.136993  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:44:32.144884  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:44:32.144907  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:44:32.144959  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:44:32.145021  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:44:32.160437  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:44:32.173683  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:44:32.173774  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:44:32.189773  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:44:32.203204  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:44:32.313641  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:44:32.432880  892123 docker.go:234] disabling docker service ...
	I1018 12:44:32.432958  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:44:32.449965  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:44:32.464069  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:44:32.584779  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:44:32.701524  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:44:32.716906  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:44:32.732220  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:44:32.732290  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.741629  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:44:32.741721  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.750956  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.760523  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.769646  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:44:32.777805  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.786814  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.795384  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:32.804860  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:44:32.812429  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:44:32.820169  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:32.933627  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:44:33.073156  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:44:33.073243  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:44:33.077339  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:44:33.077414  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:44:33.081817  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:44:33.111160  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:44:33.111248  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:44:33.140441  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:44:33.172376  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:44:33.175295  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:44:33.191834  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:44:33.195889  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:44:33.206059  892123 kubeadm.go:883] updating cluster {Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:44:33.206251  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:33.206309  892123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:44:33.242225  892123 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:44:33.242255  892123 crio.go:433] Images already preloaded, skipping extraction
	I1018 12:44:33.242314  892123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:44:33.268715  892123 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 12:44:33.268738  892123 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:44:33.268746  892123 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 12:44:33.268859  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:44:33.268940  892123 ssh_runner.go:195] Run: crio config
	I1018 12:44:33.339264  892123 cni.go:84] Creating CNI manager for ""
	I1018 12:44:33.339288  892123 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1018 12:44:33.339305  892123 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:44:33.339328  892123 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-904693 NodeName:ha-904693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:44:33.339459  892123 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-904693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:44:33.339481  892123 kube-vip.go:115] generating kube-vip config ...
	I1018 12:44:33.339539  892123 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 12:44:33.352416  892123 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:44:33.352526  892123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 12:44:33.352590  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:44:33.360442  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:44:33.360534  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 12:44:33.368315  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 12:44:33.381459  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:44:33.394655  892123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 12:44:33.407827  892123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 12:44:33.421345  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:44:33.425393  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:44:33.435521  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:33.547456  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:44:33.571606  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.2
	I1018 12:44:33.571630  892123 certs.go:195] generating shared ca certs ...
	I1018 12:44:33.571647  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:33.571882  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:44:33.572004  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:44:33.572021  892123 certs.go:257] generating profile certs ...
	I1018 12:44:33.572109  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key
	I1018 12:44:33.572141  892123 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44
	I1018 12:44:33.572159  892123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1018 12:44:34.089841  892123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 ...
	I1018 12:44:34.089879  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44: {Name:mk73ee01371c8601ccdf153e68cf18fb41b0caf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:34.090092  892123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44 ...
	I1018 12:44:34.090109  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44: {Name:mkc407effae516c519c94bd817f4f88bdad85974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:34.090201  892123 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt.a7995e44 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt
	I1018 12:44:34.090356  892123 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.a7995e44 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key
	I1018 12:44:34.090505  892123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key
	I1018 12:44:34.090525  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:44:34.090542  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:44:34.090563  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:44:34.090582  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:44:34.090598  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 12:44:34.090617  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 12:44:34.090634  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 12:44:34.090652  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 12:44:34.090706  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:44:34.090745  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:44:34.090766  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:44:34.090802  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:44:34.090831  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:44:34.090865  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:44:34.090911  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:34.090942  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.090959  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.090975  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.091691  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:44:34.111143  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:44:34.130224  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:44:34.147895  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:44:34.166568  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:44:34.191542  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:44:34.218375  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:44:34.243094  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:44:34.264702  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:44:34.290199  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:44:34.313998  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:44:34.341991  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:44:34.361379  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:44:34.380056  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:44:34.400140  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.409637  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.409718  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:44:34.514177  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:44:34.526963  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:44:34.541968  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.546450  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.546529  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:44:34.608344  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:44:34.616770  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:44:34.627781  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.635676  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.635755  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:44:34.691087  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
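The /etc/ssl/certs symlinks created above (e.g. b5213941.0 for minikubeCA.pem) use OpenSSL's subject-hash naming scheme for CA directories. A minimal shell sketch of the same step, using the minikubeCA path from the log (the .0 suffix is the collision counter OpenSSL expects):

    # compute the subject hash, then create the <hash>.0 link the way c_rehash would
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"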
	I1018 12:44:34.700436  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:44:34.704339  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:44:34.762289  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:44:34.835373  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:44:34.908492  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:44:34.968701  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:44:35.018893  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
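The -checkend 86400 runs above are plain exit-status checks: openssl exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and 1 otherwise, which is how the existing control-plane certificates are judged reusable. A standalone illustration with one of the paths from the log:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "still valid for at least 24h" \
      || echo "expires (or already expired) within 24h"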
	I1018 12:44:35.074866  892123 kubeadm.go:400] StartCluster: {Name:ha-904693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:44:35.075012  892123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 12:44:35.075100  892123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:44:35.116413  892123 cri.go:89] found id: "f3e12646a28acaf33acb91c449640e2b7c2e1b51a07fda1222a124108fa3a60d"
	I1018 12:44:35.116441  892123 cri.go:89] found id: "adda974732675bf5434d1d2f50dcf1a62d7e89e192480dcbb5a9ffec2ab87ea9"
	I1018 12:44:35.116447  892123 cri.go:89] found id: "10798af55ae16ce657fb223cc3b8e580322135ff7246e162207a86ef8e91e5de"
	I1018 12:44:35.116470  892123 cri.go:89] found id: "2df8ceef3f1125567cb2b22627f6c2b90e7425331ffa5e5bbe8a97dcb849d5af"
	I1018 12:44:35.116474  892123 cri.go:89] found id: "bb134bdda02b2b1865dbf7bfd965c0d86f8c2b7ee0818669fb4f4cfd3f5f8484"
	I1018 12:44:35.116478  892123 cri.go:89] found id: ""
	I1018 12:44:35.116537  892123 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 12:44:35.135127  892123 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:44:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 12:44:35.135230  892123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:44:35.147730  892123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:44:35.147766  892123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:44:35.147824  892123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:44:35.157524  892123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:44:35.158025  892123 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-904693" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:35.158160  892123 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "ha-904693" cluster setting kubeconfig missing "ha-904693" context setting]
	I1018 12:44:35.158473  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.159101  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:44:35.159857  892123 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 12:44:35.159896  892123 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 12:44:35.159940  892123 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 12:44:35.159949  892123 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 12:44:35.159955  892123 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 12:44:35.159960  892123 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 12:44:35.160422  892123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:44:35.173010  892123 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 12:44:35.173040  892123 kubeadm.go:601] duration metric: took 25.265992ms to restartPrimaryControlPlane
	I1018 12:44:35.173050  892123 kubeadm.go:402] duration metric: took 98.194754ms to StartCluster
	I1018 12:44:35.173077  892123 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.173159  892123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:44:35.173840  892123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:44:35.174085  892123 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:44:35.174116  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:44:35.174143  892123 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:44:35.174720  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:35.180626  892123 out.go:179] * Enabled addons: 
	I1018 12:44:35.183765  892123 addons.go:514] duration metric: took 9.629337ms for enable addons: enabled=[]
	I1018 12:44:35.183834  892123 start.go:246] waiting for cluster config update ...
	I1018 12:44:35.183849  892123 start.go:255] writing updated cluster config ...
	I1018 12:44:35.186931  892123 out.go:203] 
	I1018 12:44:35.190015  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:35.190154  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.193614  892123 out.go:179] * Starting "ha-904693-m02" control-plane node in "ha-904693" cluster
	I1018 12:44:35.196414  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:44:35.199358  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:44:35.202336  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:44:35.202376  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:44:35.202494  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:44:35.202510  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:44:35.202646  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.202901  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:44:35.244427  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:44:35.244451  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:44:35.244465  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:44:35.244491  892123 start.go:360] acquireMachinesLock for ha-904693-m02: {Name:mk6c2f485a3713f332b20d1d9fdf103954df7ac5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:44:35.244553  892123 start.go:364] duration metric: took 42.085µs to acquireMachinesLock for "ha-904693-m02"
	I1018 12:44:35.244578  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:44:35.244587  892123 fix.go:54] fixHost starting: m02
	I1018 12:44:35.244844  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:35.277624  892123 fix.go:112] recreateIfNeeded on ha-904693-m02: state=Stopped err=<nil>
	W1018 12:44:35.277652  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:44:35.280995  892123 out.go:252] * Restarting existing docker container for "ha-904693-m02" ...
	I1018 12:44:35.281088  892123 cli_runner.go:164] Run: docker start ha-904693-m02
	I1018 12:44:35.680444  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:35.711547  892123 kic.go:430] container "ha-904693-m02" state is running.
	I1018 12:44:35.711981  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:35.739312  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:44:35.739556  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:44:35.739755  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:35.771422  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:35.771751  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:35.771766  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:44:35.772400  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:44:39.052293  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m02
	
	I1018 12:44:39.052316  892123 ubuntu.go:182] provisioning hostname "ha-904693-m02"
	I1018 12:44:39.052382  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:39.080876  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:39.081188  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:39.081199  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693-m02 && echo "ha-904693-m02" | sudo tee /etc/hostname
	I1018 12:44:39.340056  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m02
	
	I1018 12:44:39.340143  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:39.373338  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:39.373649  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:39.373672  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:44:39.630504  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:44:39.630578  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:44:39.630612  892123 ubuntu.go:190] setting up certificates
	I1018 12:44:39.630652  892123 provision.go:84] configureAuth start
	I1018 12:44:39.630734  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:39.675738  892123 provision.go:143] copyHostCerts
	I1018 12:44:39.675784  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:39.675817  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:44:39.675825  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:44:39.675904  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:44:39.675996  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:39.676014  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:44:39.676020  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:44:39.676047  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:44:39.676086  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:39.676101  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:44:39.676105  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:44:39.676126  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:44:39.676170  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693-m02 san=[127.0.0.1 192.168.49.3 ha-904693-m02 localhost minikube]
	I1018 12:44:40.218129  892123 provision.go:177] copyRemoteCerts
	I1018 12:44:40.218244  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:44:40.218322  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:40.236440  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:40.357787  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:44:40.357851  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:44:40.393588  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:44:40.393654  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:44:40.414582  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:44:40.414689  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:44:40.435522  892123 provision.go:87] duration metric: took 804.840193ms to configureAuth
	I1018 12:44:40.435591  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:44:40.435862  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:40.436016  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:40.461848  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:44:40.462155  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33942 <nil> <nil>}
	I1018 12:44:40.462170  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:44:41.604038  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:44:41.604122  892123 machine.go:96] duration metric: took 5.864556191s to provisionDockerMachine
	I1018 12:44:41.604150  892123 start.go:293] postStartSetup for "ha-904693-m02" (driver="docker")
	I1018 12:44:41.604193  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:44:41.604277  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:44:41.604362  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:41.635166  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:41.769733  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:44:41.773730  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:44:41.773761  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:44:41.773774  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:44:41.773829  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:44:41.773913  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:44:41.773925  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:44:41.774028  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:44:41.784876  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:44:41.825486  892123 start.go:296] duration metric: took 221.293722ms for postStartSetup
	I1018 12:44:41.825575  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:44:41.825622  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:41.853550  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:41.984344  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:44:41.992594  892123 fix.go:56] duration metric: took 6.7479992s for fixHost
	I1018 12:44:41.992625  892123 start.go:83] releasing machines lock for "ha-904693-m02", held for 6.748059204s
	I1018 12:44:41.992720  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m02
	I1018 12:44:42.035079  892123 out.go:179] * Found network options:
	I1018 12:44:42.038018  892123 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 12:44:42.041005  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:44:42.041052  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 12:44:42.041143  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:44:42.041192  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:42.041445  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:44:42.041506  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m02
	I1018 12:44:42.075479  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:42.085476  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33942 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m02/id_rsa Username:docker}
	I1018 12:44:42.517801  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:44:42.530700  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:44:42.530775  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:44:42.589914  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:44:42.589943  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:44:42.589978  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:44:42.590036  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:44:42.638987  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:44:42.723590  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:44:42.723700  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:44:42.768190  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:44:42.816075  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:44:43.152357  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:44:43.513952  892123 docker.go:234] disabling docker service ...
	I1018 12:44:43.514041  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:44:43.540222  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:44:43.562890  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:44:43.881442  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:44:44.114079  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:44:44.148782  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:44:44.181271  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:44:44.181354  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.192614  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:44:44.192694  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.213293  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.227635  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.246173  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:44:44.260324  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.277559  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:44:44.289335  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
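The sed/grep edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place. A hedged way to confirm the result on the node (the expected lines below are reconstructed from the commands in the log, not copied from the file itself):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",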
	I1018 12:44:44.301185  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:44:44.310422  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:44:44.319878  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:44:44.623936  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 12:46:14.836486  892123 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.212505487s)
	I1018 12:46:14.836513  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:46:14.836567  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:46:14.840408  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:46:14.840481  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:46:14.844075  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:46:14.874919  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:46:14.875007  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:14.904606  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:14.937907  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:46:14.940843  892123 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 12:46:14.943768  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:46:14.960925  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:46:14.964939  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:14.975051  892123 mustload.go:65] Loading cluster: ha-904693
	I1018 12:46:14.975310  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:14.975576  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:46:14.993112  892123 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:46:14.993392  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.3
	I1018 12:46:14.993406  892123 certs.go:195] generating shared ca certs ...
	I1018 12:46:14.993423  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:46:14.993545  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:46:14.993591  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:46:14.993605  892123 certs.go:257] generating profile certs ...
	I1018 12:46:14.993681  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key
	I1018 12:46:14.993743  892123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key.385e3bc8
	I1018 12:46:14.993827  892123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key
	I1018 12:46:14.993839  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:46:14.993853  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:46:14.993868  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:46:14.993881  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:46:14.993896  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 12:46:14.993915  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 12:46:14.993927  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 12:46:14.993940  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 12:46:14.993992  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:46:14.994023  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:46:14.994036  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:46:14.994064  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:46:14.994090  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:46:14.994114  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:46:14.994159  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:14.994187  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:14.994202  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:46:14.994213  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:46:14.994275  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:46:15.025861  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:46:15.144065  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 12:46:15.148291  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 12:46:15.157425  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 12:46:15.161586  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 12:46:15.170498  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 12:46:15.175977  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 12:46:15.189359  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 12:46:15.193340  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1018 12:46:15.202262  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 12:46:15.206095  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 12:46:15.214849  892123 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 12:46:15.219115  892123 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 12:46:15.228620  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:46:15.247537  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:46:15.267038  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:46:15.296556  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:46:15.317916  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 12:46:15.336289  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:46:15.353950  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:46:15.373731  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 12:46:15.394136  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:46:15.413750  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:46:15.434057  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:46:15.453144  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 12:46:15.471392  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 12:46:15.487802  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 12:46:15.504613  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1018 12:46:15.518898  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 12:46:15.533487  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 12:46:15.549167  892123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 12:46:15.564048  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:46:15.570605  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:46:15.580039  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.584075  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.584195  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:15.625980  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:46:15.634627  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:46:15.643508  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.647557  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.647647  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:46:15.691919  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 12:46:15.702734  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:46:15.718411  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.727743  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.727823  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:46:15.778694  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:46:15.788950  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:46:15.793324  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:46:15.837931  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:46:15.890538  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:46:15.937757  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:46:15.981996  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:46:16.024029  892123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 12:46:16.066839  892123 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 12:46:16.067008  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:46:16.067038  892123 kube-vip.go:115] generating kube-vip config ...
	I1018 12:46:16.067094  892123 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 12:46:16.080115  892123 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
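kube-vip's control-plane load balancing needs the IPVS kernel modules, so after the "lsmod | grep ip_vs" probe above comes back empty, the config below falls back to plain ARP-based VIP failover without IPVS load balancing. A sketch of how the modules could be checked or loaded on a host whose kernel ships them (module names are the standard IPVS ones, not taken from this log):

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh   # load IPVS core plus common schedulers, if available
    lsmod | grep '^ip_vs'                                 # should now list the loaded modules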
	I1018 12:46:16.080187  892123 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 12:46:16.080261  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:46:16.089171  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:46:16.089252  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 12:46:16.097956  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:46:16.111585  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:46:16.125002  892123 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
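The three files written just above (the 10-kubeadm.conf drop-in, the kubelet.service unit, and the kube-vip static pod manifest) can be inspected directly on the node once in place; a small sketch using the paths from the log:

    sudo systemctl cat kubelet                         # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml   # the static pod manifest generated above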
	I1018 12:46:16.140735  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:46:16.144498  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:16.154452  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:16.294558  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:16.309039  892123 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 12:46:16.309487  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:16.314390  892123 out.go:179] * Verifying Kubernetes components...
	I1018 12:46:16.317527  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:16.453319  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:16.468140  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 12:46:16.468216  892123 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 12:46:16.468510  892123 node_ready.go:35] waiting up to 6m0s for node "ha-904693-m02" to be "Ready" ...
	I1018 12:46:18.198175  892123 node_ready.go:49] node "ha-904693-m02" is "Ready"
	I1018 12:46:18.198201  892123 node_ready.go:38] duration metric: took 1.729664998s for node "ha-904693-m02" to be "Ready" ...
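The "waiting up to 6m0s for node ... to be Ready" step above is done through client-go; a hedged kubectl equivalent, assuming the profile's kubeconfig context is named ha-904693 as minikube normally sets it:

    kubectl --context ha-904693 wait --for=condition=Ready node/ha-904693-m02 --timeout=6m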
	I1018 12:46:18.198217  892123 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:46:18.198278  892123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:46:18.217101  892123 api_server.go:72] duration metric: took 1.908011588s to wait for apiserver process to appear ...
	I1018 12:46:18.217124  892123 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:46:18.217144  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:18.251260  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:18.251333  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:18.717735  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:18.729578  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:18.729649  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:19.217875  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:19.234644  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:19.234731  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:19.717308  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:19.729198  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:19.729276  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:20.217475  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:20.226275  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:20.226367  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:20.718079  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:20.726851  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:20.727067  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:21.217664  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:21.226730  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:21.226816  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:21.717402  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:21.728568  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:21.728640  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:22.217240  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:22.225394  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:22.225426  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:22.717613  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:22.726996  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:22.727026  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:23.217597  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:23.225993  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:23.226022  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:23.717452  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:23.725986  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:23.726020  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:24.217619  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:24.225855  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:24.225886  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:24.717271  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:24.726978  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:24.727011  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:25.217464  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:25.225978  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:25.226004  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:25.717529  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:25.731613  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:25.731677  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[... identical healthz checks repeated roughly every 500 ms from 12:46:26.218 to 12:46:33.226 (api_server.go:253/279/103), each returning 500 with the same output as above: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld, all other checks ok ...]
	I1018 12:46:33.718051  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:33.726332  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:33.726367  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:34.217582  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:34.230124  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:34.230163  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:34.717418  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:34.725438  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:34.725472  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:35.218121  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:35.228207  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:35.228243  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:35.717991  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:35.726425  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:35.726455  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:36.217618  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:36.226126  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:36.226154  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:36.717772  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:36.726079  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:36.726111  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:37.217227  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:37.228703  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:37.228733  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:37.717268  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:37.725340  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:37.725369  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:38.217518  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:38.225890  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:38.225933  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:38.718202  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:38.726360  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:38.726663  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:39.217201  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:39.225234  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:39.225266  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:39.717823  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:39.726660  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:39.726690  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:40.217283  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:40.226559  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:40.226603  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:40.717962  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:40.744008  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:40.744037  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:41.217607  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:41.225920  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:41.225964  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:41.717267  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:41.725273  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:41.725300  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:42.217469  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:42.226383  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:42.226419  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:42.718060  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:42.726681  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:42.726711  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:43.217278  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:43.225508  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:43.225544  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:43.718222  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:43.728152  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:43.728184  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:44.217541  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:44.225638  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:44.225666  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:44.717265  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:44.725307  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:44.725339  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:45.220300  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:45.238786  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:45.238819  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:45.717206  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:45.726748  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:45.726780  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:46.217362  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:46.225787  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:46.225815  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:46.718214  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:46.727280  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:46:46.727306  892123 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:46:47.217946  892123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:46:47.226669  892123 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:46:47.227992  892123 api_server.go:141] control plane version: v1.34.1
	I1018 12:46:47.228017  892123 api_server.go:131] duration metric: took 29.010884789s to wait for apiserver health ...
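	For reference, the retry loop above (api_server.go polling https://192.168.49.2:8443/healthz roughly every 500ms and tolerating 500 responses until the rbac/bootstrap-roles hook completes) follows the pattern sketched below. This is a minimal illustrative sketch, not minikube's actual api_server.go code; the TLS settings, interval, and timeout are assumptions chosen to match the cadence visible in the log.

	// Minimal sketch of the healthz polling seen above: poll /healthz until it
	// returns HTTP 200 or the deadline expires. Endpoint, interval, and timeout
	// are illustrative; the test cluster uses a self-signed certificate, so this
	// sketch skips verification.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence shown in the log
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}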
	I1018 12:46:47.228027  892123 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:46:47.235895  892123 system_pods.go:59] 26 kube-system pods found
	I1018 12:46:47.235980  892123 system_pods.go:61] "coredns-66bc5c9577-np459" [33cb0fc6-b8df-4149-85da-e6417a6de391] Running
	I1018 12:46:47.236002  892123 system_pods.go:61] "coredns-66bc5c9577-w4mzd" [76a15b28-7a49-47e3-baf1-12c18b680ade] Running
	I1018 12:46:47.236024  892123 system_pods.go:61] "etcd-ha-904693" [6a65bc4e-41f8-48fd-a64a-c1920f35caf4] Running
	I1018 12:46:47.236074  892123 system_pods.go:61] "etcd-ha-904693-m02" [94a516fe-dcfe-4e93-baa3-fb16142884cc] Running
	I1018 12:46:47.236094  892123 system_pods.go:61] "etcd-ha-904693-m03" [f2d9e3be-8b60-4549-a41d-d8bdab528ea7] Running
	I1018 12:46:47.236117  892123 system_pods.go:61] "kindnet-j75n6" [b30c1029-3217-42b0-87d1-f96b2bf02858] Running
	I1018 12:46:47.236155  892123 system_pods.go:61] "kindnet-lwbfx" [2053e657-7951-4224-aac4-980e101bc352] Running
	I1018 12:46:47.236181  892123 system_pods.go:61] "kindnet-nqql7" [061fc15c-de36-4123-8bb7-ac3d65a44ba4] Running
	I1018 12:46:47.236201  892123 system_pods.go:61] "kindnet-z2jqf" [adbd3882-090c-44e7-96c0-8374c4c8761e] Running
	I1018 12:46:47.236241  892123 system_pods.go:61] "kube-apiserver-ha-904693" [21472a04-9583-4452-949b-6d0d5c44ca4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:46:47.236265  892123 system_pods.go:61] "kube-apiserver-ha-904693-m02" [095e1af5-5aea-4dad-aa89-09611005c26b] Running
	I1018 12:46:47.236284  892123 system_pods.go:61] "kube-apiserver-ha-904693-m03" [ac2fa248-fb39-471a-953b-5caff0045c23] Running
	I1018 12:46:47.236324  892123 system_pods.go:61] "kube-controller-manager-ha-904693" [e46c064c-8863-43f6-8049-bc7f6b5fd6e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:46:47.236350  892123 system_pods.go:61] "kube-controller-manager-ha-904693-m02" [01ced66c-fe9a-49cb-96f9-f117382aaa39] Running
	I1018 12:46:47.236373  892123 system_pods.go:61] "kube-controller-manager-ha-904693-m03" [b2a37b0a-af53-4e5f-b048-630fa65a4562] Running
	I1018 12:46:47.236410  892123 system_pods.go:61] "kube-proxy-25w58" [8120ec45-9954-42fc-ba8c-1784f050d7c7] Running
	I1018 12:46:47.236436  892123 system_pods.go:61] "kube-proxy-bckwd" [3ef760c9-0925-40c4-a43d-3dc1bc11a4f3] Running
	I1018 12:46:47.236457  892123 system_pods.go:61] "kube-proxy-s8rqn" [1b0abab1-7503-4dbb-874d-3a89837e39b8] Running
	I1018 12:46:47.236497  892123 system_pods.go:61] "kube-proxy-xvnxv" [1babac5c-cb8e-4b88-8a73-387df9d8b652] Running
	I1018 12:46:47.236526  892123 system_pods.go:61] "kube-scheduler-ha-904693" [a40b4487-da19-47c0-a990-d459235cd8f0] Running
	I1018 12:46:47.236548  892123 system_pods.go:61] "kube-scheduler-ha-904693-m02" [32877fa9-7d21-4d37-9c42-855b6fd4c11f] Running
	I1018 12:46:47.236581  892123 system_pods.go:61] "kube-scheduler-ha-904693-m03" [fbe42864-50a4-4b9f-bee1-96f3e3db090d] Running
	I1018 12:46:47.236605  892123 system_pods.go:61] "kube-vip-ha-904693" [04fca9f1-a6fd-45a0-abb1-1b9226e1f8f4] Running
	I1018 12:46:47.236627  892123 system_pods.go:61] "kube-vip-ha-904693-m02" [2563b6ff-3a9b-487b-a469-d3a58046306b] Running
	I1018 12:46:47.236663  892123 system_pods.go:61] "kube-vip-ha-904693-m03" [67639c6c-f2c1-4066-999a-b1edb1875d5d] Running
	I1018 12:46:47.236688  892123 system_pods.go:61] "storage-provisioner" [d490933f-6cca-41d5-a5d3-d128def7ed62] Running
	I1018 12:46:47.236711  892123 system_pods.go:74] duration metric: took 8.677343ms to wait for pod list to return data ...
	I1018 12:46:47.236747  892123 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:46:47.240740  892123 default_sa.go:45] found service account: "default"
	I1018 12:46:47.240819  892123 default_sa.go:55] duration metric: took 4.047411ms for default service account to be created ...
	I1018 12:46:47.240844  892123 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:46:47.252062  892123 system_pods.go:86] 26 kube-system pods found
	I1018 12:46:47.252100  892123 system_pods.go:89] "coredns-66bc5c9577-np459" [33cb0fc6-b8df-4149-85da-e6417a6de391] Running
	I1018 12:46:47.252109  892123 system_pods.go:89] "coredns-66bc5c9577-w4mzd" [76a15b28-7a49-47e3-baf1-12c18b680ade] Running
	I1018 12:46:47.252113  892123 system_pods.go:89] "etcd-ha-904693" [6a65bc4e-41f8-48fd-a64a-c1920f35caf4] Running
	I1018 12:46:47.252143  892123 system_pods.go:89] "etcd-ha-904693-m02" [94a516fe-dcfe-4e93-baa3-fb16142884cc] Running
	I1018 12:46:47.252155  892123 system_pods.go:89] "etcd-ha-904693-m03" [f2d9e3be-8b60-4549-a41d-d8bdab528ea7] Running
	I1018 12:46:47.252160  892123 system_pods.go:89] "kindnet-j75n6" [b30c1029-3217-42b0-87d1-f96b2bf02858] Running
	I1018 12:46:47.252164  892123 system_pods.go:89] "kindnet-lwbfx" [2053e657-7951-4224-aac4-980e101bc352] Running
	I1018 12:46:47.252174  892123 system_pods.go:89] "kindnet-nqql7" [061fc15c-de36-4123-8bb7-ac3d65a44ba4] Running
	I1018 12:46:47.252178  892123 system_pods.go:89] "kindnet-z2jqf" [adbd3882-090c-44e7-96c0-8374c4c8761e] Running
	I1018 12:46:47.252186  892123 system_pods.go:89] "kube-apiserver-ha-904693" [21472a04-9583-4452-949b-6d0d5c44ca4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:46:47.252198  892123 system_pods.go:89] "kube-apiserver-ha-904693-m02" [095e1af5-5aea-4dad-aa89-09611005c26b] Running
	I1018 12:46:47.252219  892123 system_pods.go:89] "kube-apiserver-ha-904693-m03" [ac2fa248-fb39-471a-953b-5caff0045c23] Running
	I1018 12:46:47.252234  892123 system_pods.go:89] "kube-controller-manager-ha-904693" [e46c064c-8863-43f6-8049-bc7f6b5fd6e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:46:47.252239  892123 system_pods.go:89] "kube-controller-manager-ha-904693-m02" [01ced66c-fe9a-49cb-96f9-f117382aaa39] Running
	I1018 12:46:47.252247  892123 system_pods.go:89] "kube-controller-manager-ha-904693-m03" [b2a37b0a-af53-4e5f-b048-630fa65a4562] Running
	I1018 12:46:47.252252  892123 system_pods.go:89] "kube-proxy-25w58" [8120ec45-9954-42fc-ba8c-1784f050d7c7] Running
	I1018 12:46:47.252256  892123 system_pods.go:89] "kube-proxy-bckwd" [3ef760c9-0925-40c4-a43d-3dc1bc11a4f3] Running
	I1018 12:46:47.252260  892123 system_pods.go:89] "kube-proxy-s8rqn" [1b0abab1-7503-4dbb-874d-3a89837e39b8] Running
	I1018 12:46:47.252264  892123 system_pods.go:89] "kube-proxy-xvnxv" [1babac5c-cb8e-4b88-8a73-387df9d8b652] Running
	I1018 12:46:47.252277  892123 system_pods.go:89] "kube-scheduler-ha-904693" [a40b4487-da19-47c0-a990-d459235cd8f0] Running
	I1018 12:46:47.252294  892123 system_pods.go:89] "kube-scheduler-ha-904693-m02" [32877fa9-7d21-4d37-9c42-855b6fd4c11f] Running
	I1018 12:46:47.252308  892123 system_pods.go:89] "kube-scheduler-ha-904693-m03" [fbe42864-50a4-4b9f-bee1-96f3e3db090d] Running
	I1018 12:46:47.252312  892123 system_pods.go:89] "kube-vip-ha-904693" [04fca9f1-a6fd-45a0-abb1-1b9226e1f8f4] Running
	I1018 12:46:47.252318  892123 system_pods.go:89] "kube-vip-ha-904693-m02" [2563b6ff-3a9b-487b-a469-d3a58046306b] Running
	I1018 12:46:47.252336  892123 system_pods.go:89] "kube-vip-ha-904693-m03" [67639c6c-f2c1-4066-999a-b1edb1875d5d] Running
	I1018 12:46:47.252342  892123 system_pods.go:89] "storage-provisioner" [d490933f-6cca-41d5-a5d3-d128def7ed62] Running
	I1018 12:46:47.252357  892123 system_pods.go:126] duration metric: took 11.472811ms to wait for k8s-apps to be running ...
	I1018 12:46:47.252376  892123 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:46:47.252446  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:46:47.269517  892123 system_svc.go:56] duration metric: took 17.132227ms WaitForService to wait for kubelet
	I1018 12:46:47.269546  892123 kubeadm.go:586] duration metric: took 30.960462504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
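	The "waiting for k8s-apps to be running" phase above enumerates kube-system pods and checks their status. A rough client-go sketch of that kind of check is shown below; it is illustrative only (minikube's system_pods.go applies additional readiness rules), and the kubeconfig path is a placeholder assumption.

	// Sketch: list kube-system pods and report any that are not Running.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the real test harness uses its own profile dir.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
			}
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	}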
	I1018 12:46:47.269566  892123 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:46:47.274201  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274235  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274248  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274253  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274257  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:47.274296  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:47.274304  892123 node_conditions.go:105] duration metric: took 4.713888ms to run NodePressure ...
	I1018 12:46:47.274322  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:46:47.274358  892123 start.go:255] writing updated cluster config ...
	I1018 12:46:47.277881  892123 out.go:203] 
	I1018 12:46:47.280982  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:47.281113  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.284552  892123 out.go:179] * Starting "ha-904693-m04" worker node in "ha-904693" cluster
	I1018 12:46:47.288329  892123 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:46:47.290468  892123 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:46:47.293413  892123 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 12:46:47.293456  892123 cache.go:58] Caching tarball of preloaded images
	I1018 12:46:47.293503  892123 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:46:47.293595  892123 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 12:46:47.293607  892123 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 12:46:47.293757  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.314739  892123 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:46:47.314762  892123 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:46:47.314780  892123 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:46:47.314805  892123 start.go:360] acquireMachinesLock for ha-904693-m04: {Name:mk97ed96515b1272cbdea992e117b8911f5b1654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:46:47.314870  892123 start.go:364] duration metric: took 45.481µs to acquireMachinesLock for "ha-904693-m04"
	I1018 12:46:47.314893  892123 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:46:47.314902  892123 fix.go:54] fixHost starting: m04
	I1018 12:46:47.315155  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:46:47.332443  892123 fix.go:112] recreateIfNeeded on ha-904693-m04: state=Stopped err=<nil>
	W1018 12:46:47.332521  892123 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:46:47.335757  892123 out.go:252] * Restarting existing docker container for "ha-904693-m04" ...
	I1018 12:46:47.335864  892123 cli_runner.go:164] Run: docker start ha-904693-m04
	I1018 12:46:47.662072  892123 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:46:47.692999  892123 kic.go:430] container "ha-904693-m04" state is running.
	I1018 12:46:47.693365  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:47.716277  892123 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/config.json ...
	I1018 12:46:47.716634  892123 machine.go:93] provisionDockerMachine start ...
	I1018 12:46:47.716712  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:47.737549  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:47.737866  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:47.737883  892123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:46:47.738856  892123 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 12:46:50.891423  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m04
	
	I1018 12:46:50.891500  892123 ubuntu.go:182] provisioning hostname "ha-904693-m04"
	I1018 12:46:50.891579  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:50.911143  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:50.911556  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:50.911590  892123 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-904693-m04 && echo "ha-904693-m04" | sudo tee /etc/hostname
	I1018 12:46:51.083384  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-904693-m04
	
	I1018 12:46:51.083546  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.103177  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:51.103480  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:51.103496  892123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-904693-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-904693-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-904693-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:46:51.264024  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:46:51.264123  892123 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 12:46:51.264148  892123 ubuntu.go:190] setting up certificates
	I1018 12:46:51.264172  892123 provision.go:84] configureAuth start
	I1018 12:46:51.264250  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:51.283401  892123 provision.go:143] copyHostCerts
	I1018 12:46:51.283446  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:46:51.283481  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 12:46:51.283494  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 12:46:51.283573  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 12:46:51.283688  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:46:51.283714  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 12:46:51.283724  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 12:46:51.283763  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 12:46:51.283815  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:46:51.283836  892123 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 12:46:51.283845  892123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 12:46:51.283870  892123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 12:46:51.283923  892123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.ha-904693-m04 san=[127.0.0.1 192.168.49.5 ha-904693-m04 localhost minikube]
	I1018 12:46:51.487797  892123 provision.go:177] copyRemoteCerts
	I1018 12:46:51.487868  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:46:51.487911  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.510008  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:51.615718  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 12:46:51.615785  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 12:46:51.634401  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 12:46:51.634467  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:46:51.655136  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 12:46:51.655199  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:46:51.677312  892123 provision.go:87] duration metric: took 413.118272ms to configureAuth
	I1018 12:46:51.677338  892123 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:46:51.677569  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:51.677678  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:51.695105  892123 main.go:141] libmachine: Using SSH client type: native
	I1018 12:46:51.695420  892123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 33947 <nil> <nil>}
	I1018 12:46:51.695442  892123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 12:46:52.007291  892123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 12:46:52.007315  892123 machine.go:96] duration metric: took 4.290661536s to provisionDockerMachine
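Note: the provisioning step above writes a CRI-O options drop-in over SSH and restarts the runtime. A minimal equivalent of the logged command, run by hand on the node (the file path and the --insecure-registry value are taken from the log; nothing else is added):

    # Write the options file CRI-O picks up via /etc/sysconfig, then restart the runtime
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio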
	I1018 12:46:52.007328  892123 start.go:293] postStartSetup for "ha-904693-m04" (driver="docker")
	I1018 12:46:52.007341  892123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:46:52.007440  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:46:52.007488  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.034279  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.148189  892123 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:46:52.151952  892123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:46:52.152034  892123 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:46:52.152060  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 12:46:52.152123  892123 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 12:46:52.152205  892123 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 12:46:52.152217  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	I1018 12:46:52.152317  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 12:46:52.160224  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:52.185280  892123 start.go:296] duration metric: took 177.935801ms for postStartSetup
	I1018 12:46:52.185367  892123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:46:52.185409  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.204012  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.309958  892123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:46:52.318024  892123 fix.go:56] duration metric: took 5.003113681s for fixHost
	I1018 12:46:52.318051  892123 start.go:83] releasing machines lock for "ha-904693-m04", held for 5.003169468s
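Note: the fixHost phase above only restarts the stopped worker container and re-reads its state. The same check by hand, using the cli_runner invocations shown in the log:

    # Restart the stopped node container and confirm it is running
    docker start ha-904693-m04
    docker container inspect ha-904693-m04 --format '{{.State.Status}}'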
	I1018 12:46:52.318132  892123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:46:52.338543  892123 out.go:179] * Found network options:
	I1018 12:46:52.341584  892123 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1018 12:46:52.344371  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344399  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344423  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 12:46:52.344438  892123 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 12:46:52.344508  892123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 12:46:52.344554  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.344831  892123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:46:52.344903  892123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:46:52.372515  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.374225  892123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33947 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:46:52.579686  892123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:46:52.584329  892123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:46:52.584402  892123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:46:52.593417  892123 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 12:46:52.593443  892123 start.go:495] detecting cgroup driver to use...
	I1018 12:46:52.593476  892123 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:46:52.593524  892123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 12:46:52.609004  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 12:46:52.623230  892123 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:46:52.623318  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:46:52.639717  892123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:46:52.657699  892123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:46:52.794706  892123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:46:52.921750  892123 docker.go:234] disabling docker service ...
	I1018 12:46:52.921870  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:46:52.939978  892123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:46:52.957529  892123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:46:53.104620  892123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:46:53.235063  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:46:53.249044  892123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:46:53.264364  892123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 12:46:53.264444  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.277945  892123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 12:46:53.278028  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.288323  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.297677  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.306794  892123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:46:53.314879  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.325157  892123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.333994  892123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 12:46:53.343268  892123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:46:53.351341  892123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:46:53.359207  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:53.488389  892123 ssh_runner.go:195] Run: sudo systemctl restart crio
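Note: the sed edits above adjust /etc/crio/crio.conf.d/02-crio.conf before the restart. A sketch of the relevant keys after those edits (only the values that appear in the logged sed expressions are certain; the rest of the drop-in is assumed to be the stock minikube layout):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the logged edits)
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]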
	I1018 12:46:53.631149  892123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 12:46:53.631269  892123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 12:46:53.635894  892123 start.go:563] Will wait 60s for crictl version
	I1018 12:46:53.636001  892123 ssh_runner.go:195] Run: which crictl
	I1018 12:46:53.640586  892123 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:46:53.680864  892123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 12:46:53.680981  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:53.722237  892123 ssh_runner.go:195] Run: crio --version
	I1018 12:46:53.757817  892123 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 12:46:53.760732  892123 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 12:46:53.763576  892123 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 12:46:53.765748  892123 cli_runner.go:164] Run: docker network inspect ha-904693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:46:53.783043  892123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:46:53.787170  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:53.797279  892123 mustload.go:65] Loading cluster: ha-904693
	I1018 12:46:53.797525  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:53.797787  892123 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:46:53.816361  892123 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:46:53.816630  892123 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693 for IP: 192.168.49.5
	I1018 12:46:53.816637  892123 certs.go:195] generating shared ca certs ...
	I1018 12:46:53.816653  892123 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:46:53.816755  892123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 12:46:53.816795  892123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 12:46:53.816807  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 12:46:53.816820  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 12:46:53.816830  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 12:46:53.816843  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 12:46:53.816895  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 12:46:53.816925  892123 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 12:46:53.816933  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 12:46:53.816956  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 12:46:53.816977  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:46:53.816997  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 12:46:53.817039  892123 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 12:46:53.817065  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:53.817077  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem -> /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.817087  892123 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> /usr/share/ca-certificates/8360862.pem
	I1018 12:46:53.817105  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:46:53.836940  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 12:46:53.857942  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:46:53.880441  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:46:53.899127  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:46:53.928293  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 12:46:53.948582  892123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 12:46:53.967019  892123 ssh_runner.go:195] Run: openssl version
	I1018 12:46:53.973552  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 12:46:53.982588  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.986756  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 12:46:53.986822  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 12:46:54.033044  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 12:46:54.042429  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 12:46:54.051990  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.056823  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.056924  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 12:46:54.099082  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:46:54.107933  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:46:54.117094  892123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.121498  892123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.121603  892123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:46:54.164645  892123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
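Note: each certificate installed above is exposed to OpenSSL by symlinking it under its subject-hash name, which is what the logged `openssl x509 -hash` calls compute. The same pair of steps done by hand for the minikubeCA cert (paths and the b5213941 hash are from the log):

    # Compute the subject hash OpenSSL uses for CA lookup, then link it into /etc/ssl/certs
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"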
	I1018 12:46:54.179721  892123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:46:54.183706  892123 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:46:54.183754  892123 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1  false true} ...
	I1018 12:46:54.183838  892123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-904693 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:46:54.183909  892123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:46:54.192639  892123 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:46:54.192775  892123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1018 12:46:54.200819  892123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 12:46:54.215040  892123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
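Note: the two scp-from-memory writes above lay down the kubelet unit and its kubeadm drop-in from the template printed by kubeadm.go earlier in the log. A sketch of the drop-in as it should land on the node (the ExecStart flags are copied from the logged template; the surrounding section layout is assumed):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstructed from the logged template)
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-904693-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5

    [Install]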
	I1018 12:46:54.229836  892123 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 12:46:54.234543  892123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:46:54.244928  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:54.376940  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:54.392818  892123 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1018 12:46:54.393235  892123 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:46:54.396046  892123 out.go:179] * Verifying Kubernetes components...
	I1018 12:46:54.399111  892123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:46:54.530712  892123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:46:54.553448  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 12:46:54.553522  892123 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 12:46:54.553818  892123 node_ready.go:35] waiting up to 6m0s for node "ha-904693-m04" to be "Ready" ...
	I1018 12:46:54.557200  892123 node_ready.go:49] node "ha-904693-m04" is "Ready"
	I1018 12:46:54.557238  892123 node_ready.go:38] duration metric: took 3.399257ms for node "ha-904693-m04" to be "Ready" ...
	I1018 12:46:54.557252  892123 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:46:54.557309  892123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:46:54.571372  892123 system_svc.go:56] duration metric: took 14.111509ms WaitForService to wait for kubelet
	I1018 12:46:54.571412  892123 kubeadm.go:586] duration metric: took 178.543905ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
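Note: the readiness wait above polls the node object and the kubelet service. An equivalent manual spot check, assuming kubectl is pointed at the ha-904693 cluster (the jsonpath expression and the minikube ssh invocation are illustrative, not part of the logged run):

    # Check the worker node's Ready condition and the kubelet service state by hand
    kubectl get node ha-904693-m04 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    minikube -p ha-904693 ssh -n m04 -- sudo systemctl is-active kubelet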
	I1018 12:46:54.571434  892123 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:46:54.575184  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575215  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575227  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575232  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575236  892123 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:46:54.575242  892123 node_conditions.go:123] node cpu capacity is 2
	I1018 12:46:54.575247  892123 node_conditions.go:105] duration metric: took 3.806637ms to run NodePressure ...
	I1018 12:46:54.575260  892123 start.go:241] waiting for startup goroutines ...
	I1018 12:46:54.575287  892123 start.go:255] writing updated cluster config ...
	I1018 12:46:54.575609  892123 ssh_runner.go:195] Run: rm -f paused
	I1018 12:46:54.579787  892123 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:46:54.580332  892123 kapi.go:59] client config for ha-904693: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/ha-904693/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 12:46:54.597506  892123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-np459" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.603509  892123 pod_ready.go:94] pod "coredns-66bc5c9577-np459" is "Ready"
	I1018 12:46:54.603539  892123 pod_ready.go:86] duration metric: took 6.000704ms for pod "coredns-66bc5c9577-np459" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.603550  892123 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w4mzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.611441  892123 pod_ready.go:94] pod "coredns-66bc5c9577-w4mzd" is "Ready"
	I1018 12:46:54.611468  892123 pod_ready.go:86] duration metric: took 7.909713ms for pod "coredns-66bc5c9577-w4mzd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.615301  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.622147  892123 pod_ready.go:94] pod "etcd-ha-904693" is "Ready"
	I1018 12:46:54.622188  892123 pod_ready.go:86] duration metric: took 6.858682ms for pod "etcd-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.622213  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.628869  892123 pod_ready.go:94] pod "etcd-ha-904693-m02" is "Ready"
	I1018 12:46:54.628906  892123 pod_ready.go:86] duration metric: took 6.68035ms for pod "etcd-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.628916  892123 pod_ready.go:83] waiting for pod "etcd-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:54.781287  892123 request.go:683] "Waited before sending request" delay="152.209169ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-904693-m03"
	I1018 12:46:54.981063  892123 request.go:683] "Waited before sending request" delay="194.309357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m03"
	I1018 12:46:54.984206  892123 pod_ready.go:99] pod "etcd-ha-904693-m03" in "kube-system" namespace is gone: node "ha-904693-m03" hosting pod "etcd-ha-904693-m03" is not found/running (skipping!): nodes "ha-904693-m03" not found
	I1018 12:46:54.984230  892123 pod_ready.go:86] duration metric: took 355.308487ms for pod "etcd-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:55.181697  892123 request.go:683] "Waited before sending request" delay="197.366801ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1018 12:46:55.185514  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:55.380841  892123 request.go:683] "Waited before sending request" delay="195.16471ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693"
	I1018 12:46:55.581533  892123 request.go:683] "Waited before sending request" delay="196.391315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:55.781523  892123 request.go:683] "Waited before sending request" delay="95.293605ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693"
	I1018 12:46:55.981310  892123 request.go:683] "Waited before sending request" delay="196.367824ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.381644  892123 request.go:683] "Waited before sending request" delay="186.36368ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.781281  892123 request.go:683] "Waited before sending request" delay="92.241215ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:56.784454  892123 pod_ready.go:94] pod "kube-apiserver-ha-904693" is "Ready"
	I1018 12:46:56.784481  892123 pod_ready.go:86] duration metric: took 1.598894155s for pod "kube-apiserver-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:56.784491  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:56.980828  892123 request.go:683] "Waited before sending request" delay="196.248142ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693-m02"
	I1018 12:46:57.181477  892123 request.go:683] "Waited before sending request" delay="197.376581ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m02"
	I1018 12:46:57.184898  892123 pod_ready.go:94] pod "kube-apiserver-ha-904693-m02" is "Ready"
	I1018 12:46:57.184987  892123 pod_ready.go:86] duration metric: took 400.485818ms for pod "kube-apiserver-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.185012  892123 pod_ready.go:83] waiting for pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.381473  892123 request.go:683] "Waited before sending request" delay="196.32459ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-904693-m03"
	I1018 12:46:57.581071  892123 request.go:683] "Waited before sending request" delay="196.144823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693-m03"
	I1018 12:46:57.583949  892123 pod_ready.go:99] pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace is gone: node "ha-904693-m03" hosting pod "kube-apiserver-ha-904693-m03" is not found/running (skipping!): nodes "ha-904693-m03" not found
	I1018 12:46:57.583972  892123 pod_ready.go:86] duration metric: took 398.952558ms for pod "kube-apiserver-ha-904693-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.781459  892123 request.go:683] "Waited before sending request" delay="197.326545ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1018 12:46:57.785500  892123 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:46:57.980788  892123 request.go:683] "Waited before sending request" delay="195.154281ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-904693"
	I1018 12:46:58.181517  892123 request.go:683] "Waited before sending request" delay="197.28876ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:58.381504  892123 request.go:683] "Waited before sending request" delay="95.288468ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-904693"
	I1018 12:46:58.580784  892123 request.go:683] "Waited before sending request" delay="194.281533ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:58.980851  892123 request.go:683] "Waited before sending request" delay="191.275019ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	I1018 12:46:59.381533  892123 request.go:683] "Waited before sending request" delay="92.286237ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-904693"
	W1018 12:46:59.792577  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:02.292675  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:04.293083  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:06.791662  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:08.795381  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:11.291608  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:13.291844  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:15.792067  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:18.291597  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:20.293497  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:22.793443  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	W1018 12:47:25.292520  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693" is not "Ready", error: <nil>
	I1018 12:47:26.791941  892123 pod_ready.go:94] pod "kube-controller-manager-ha-904693" is "Ready"
	I1018 12:47:26.791970  892123 pod_ready.go:86] duration metric: took 29.006442197s for pod "kube-controller-manager-ha-904693" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:47:26.791980  892123 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:47:28.799636  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:31.297899  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:33.298942  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:35.299122  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:37.799274  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:39.799373  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:42.301596  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:44.799207  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:47.299820  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:49.300296  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:51.798423  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:53.799278  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:56.298648  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:47:58.299303  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:00.306006  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:02.799215  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:04.802074  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:07.299319  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:09.799601  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:12.299633  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:14.799487  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:17.298286  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:19.298543  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:21.299532  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:23.799455  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:25.799781  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:28.299460  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:30.798185  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:32.799335  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:35.298104  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:37.299134  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:39.299272  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:41.299448  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:43.798462  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:45.799490  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:48.299004  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:50.299216  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:52.300129  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:54.301209  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:56.798691  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:48:59.299033  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:01.299417  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:03.798310  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:05.798466  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:08.298020  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:10.298851  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:12.299443  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:14.798426  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:17.299094  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:19.299178  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:21.798879  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:24.299310  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:26.798113  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:29.298413  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:31.799065  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:33.799271  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:35.803906  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:38.299064  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:40.299407  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:42.299972  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:44.798560  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:46.798758  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:48.799585  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:51.299544  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:53.300291  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:55.799555  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:49:58.298220  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:00.308856  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:02.799995  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:05.298036  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:07.300018  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:09.799328  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:12.298707  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:14.298758  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:16.798951  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:19.299158  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:21.799396  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:23.799509  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:26.298486  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:28.298553  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:30.298649  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:32.299193  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:34.800007  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:37.299243  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:39.799471  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:42.299390  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:44.798986  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:47.298083  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:49.300477  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:51.799774  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	W1018 12:50:54.298353  892123 pod_ready.go:104] pod "kube-controller-manager-ha-904693-m02" is not "Ready", error: <nil>
	I1018 12:50:54.580674  892123 pod_ready.go:86] duration metric: took 3m27.788657319s for pod "kube-controller-manager-ha-904693-m02" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:50:54.580708  892123 pod_ready.go:65] not all pods in "kube-system" namespace with "component=kube-controller-manager" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I1018 12:50:54.580723  892123 pod_ready.go:40] duration metric: took 4m0.000906152s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:50:54.583790  892123 out.go:203] 
	W1018 12:50:54.586624  892123 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I1018 12:50:54.589451  892123 out.go:203] 
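
	The wait loop above (pod_ready.go) polls kube-controller-manager-ha-904693-m02 until the pod reports Ready or the 4-minute extra-wait deadline expires, then minikube exits with GUEST_START. A minimal client-go sketch of the same polling idea follows; the kubeconfig path is an assumption and this is an illustration, not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; the test profile writes its own.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		// Poll every ~2s, like the timestamps above, until Ready or deadline exceeded.
		err = wait.PollUntilContextCancel(ctx, 2*time.Second, true, func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-controller-manager-ha-904693-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep retrying
			}
			return isPodReady(pod), nil
		})
		fmt.Println("wait result:", err) // non-nil when the deadline is hit, as in the log
	}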
	
	
	==> CRI-O <==
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.15248919Z" level=info msg="Removing container: 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.162574204Z" level=info msg="Error loading conmon cgroup of container 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c: cgroup deleted" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:45:47 ha-904693 crio[667]: time="2025-10-18T12:45:47.166108461Z" level=info msg="Removed container 38930abbec5ed0ce218179fc2dffdc2fe464d75b9754449b3594bd7e8f1a073c: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=eb96e24d-0bf6-4cd9-8494-73ee2ff14c76 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.757273139Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d6ef62be-0670-480d-80ef-805d2541c64a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.75822135Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=edbacbee-34c6-44e3-8f4d-c6941ddde03a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.759324246Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=dc29f712-7c3a-4dac-a06a-164b273dd7b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.759550702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.7650266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.765739428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.786332369Z" level=info msg="Created container 6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=dc29f712-7c3a-4dac-a06a-164b273dd7b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.787077969Z" level=info msg="Starting container: 6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8" id=fda5d9b2-9dfd-4967-9d1d-f43575d0dec0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:46:08 ha-904693 crio[667]: time="2025-10-18T12:46:08.79106357Z" level=info msg="Started container" PID=1459 containerID=6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8 description=kube-system/kube-controller-manager-ha-904693/kube-controller-manager id=fda5d9b2-9dfd-4967-9d1d-f43575d0dec0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcaeeb6053eea250fdbfb9cf232775c5e74d7fbe49740ec76a8f8660f55d7bb
	Oct 18 12:46:22 ha-904693 conmon[1457]: conmon 6b9ca29a1030f2e300fa <ninfo>: container 1459 exited with status 1
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.247328418Z" level=info msg="Removing container: 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.255943755Z" level=info msg="Error loading conmon cgroup of container 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610: cgroup deleted" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:46:23 ha-904693 crio[667]: time="2025-10-18T12:46:23.260457493Z" level=info msg="Removed container 6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=0e8cc66f-432a-4252-a35c-aba4f2a6f2cf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.757343358Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=522be43b-97c6-4135-8419-131b53678f0e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.760799411Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=d33a4b7e-c8b6-4953-96d1-ec05fe811ee2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.763087148Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=ca79c353-2f92-46a9-b879-eb4c49528d96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.763391996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.776323243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.77706803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.797430732Z" level=info msg="Created container d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a: kube-system/kube-controller-manager-ha-904693/kube-controller-manager" id=ca79c353-2f92-46a9-b879-eb4c49528d96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.798666134Z" level=info msg="Starting container: d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a" id=234e12c8-0841-4b87-8ee3-3a75b5d265a4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 12:47:07 ha-904693 crio[667]: time="2025-10-18T12:47:07.808104346Z" level=info msg="Started container" PID=1512 containerID=d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a description=kube-system/kube-controller-manager-ha-904693/kube-controller-manager id=234e12c8-0841-4b87-8ee3-3a75b5d265a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3dcaeeb6053eea250fdbfb9cf232775c5e74d7fbe49740ec76a8f8660f55d7bb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	d0b92a674c67c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   7                   3dcaeeb6053ee       kube-controller-manager-ha-904693   kube-system
	6b9ca29a1030f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   6                   3dcaeeb6053ee       kube-controller-manager-ha-904693   kube-system
	e1f431489a678       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       4                   6974f2ca4c496       storage-provisioner                 kube-system
	77f72db48997f       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  3                   3f717be18b100       kube-vip-ha-904693                  kube-system
	3ed6de721b810       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   81c0a2ba3eb27       coredns-66bc5c9577-np459            kube-system
	56bb35c643a21       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   2                   1229fa54d0b21       busybox-7b57f96db7-v452k            default
	5956d42910b21       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   a43d3d54495f1       coredns-66bc5c9577-w4mzd            kube-system
	b3ff0956e2bae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       3                   6974f2ca4c496       storage-provisioner                 kube-system
	b7079b16a9b7a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               2                   d48f01f8d4f05       kindnet-z2jqf                       kube-system
	664bc261a2046       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                2                   d2c7a02dbdc37       kube-proxy-xvnxv                    kube-system
	f3e12646a28ac       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Running             kube-apiserver            3                   2e67607845f25       kube-apiserver-ha-904693            kube-system
	10798af55ae16       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            2                   76601f4f16313       kube-scheduler-ha-904693            kube-system
	2df8ceef3f112       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      2                   cd330999b4f8d       etcd-ha-904693                      kube-system
	bb134bdda02b2       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Exited              kube-vip                  2                   3f717be18b100       kube-vip-ha-904693                  kube-system
	
	
	==> coredns [3ed6de721b81080e2d7009286cc18bd29f76863256af50d7e4af0f831a5e0461] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39165 - 29689 "HINFO IN 1724432357811573338.8138158095689922977. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017539888s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
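
	Both CoreDNS replicas fail the same way: the kubernetes plugin's reflectors cannot list Namespaces, Services, or EndpointSlices through the Service VIP https://10.96.0.1:443 (dial i/o timeout), so the server starts with an unsynced Kubernetes API. A small in-cluster sketch of the same list call, useful for reproducing the timeout from inside a pod, follows; it assumes it runs in-cluster with a service account, which is not part of the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves to the same https://10.96.0.1:443 VIP CoreDNS uses.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cfg.Timeout = 10 * time.Second

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()

		// Same shape as the reflector's initial call: list namespaces with Limit=500.
		nsList, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list failed (matches the CoreDNS i/o timeout):", err)
			return
		}
		fmt.Println("API reachable; namespaces:", len(nsList.Items))
	}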
	
	
	==> coredns [5956d42910b21e70d3584ad16135f23f6c36232c73ad84e364d7d969d267b3ce] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58851 - 10237 "HINFO IN 6142564933790260897.8896674369146005175. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017439783s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-904693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_37_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:36:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:52:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:52:12 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:52:12 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:52:12 +0000   Sat, 18 Oct 2025 12:36:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:52:12 +0000   Sat, 18 Oct 2025 12:37:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-904693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                281bd447-f1be-4669-83e5-596eea808f91
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v452k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-np459             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-66bc5c9577-w4mzd             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-ha-904693                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-z2jqf                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-904693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-904693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-xvnxv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-904693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-904693                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m42s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 9m40s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)      kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           15m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-904693 status is now: NodeReady
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           9m42s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           9m38s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           9m7s                   node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   Starting                 7m55s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m55s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m55s (x8 over 7m55s)  kubelet          Node ha-904693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m55s (x8 over 7m55s)  kubelet          Node ha-904693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m55s (x8 over 7m55s)  kubelet          Node ha-904693 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m18s                  node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	  Normal   RegisteredNode           52s                    node-controller  Node ha-904693 event: Registered Node ha-904693 in Controller
	
	
	Name:               ha-904693-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T12_37_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:37:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:52:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:51:37 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:51:37 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:51:37 +0000   Sat, 18 Oct 2025 12:37:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:51:37 +0000   Sat, 18 Oct 2025 12:38:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-904693-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                731d6d01-e152-4180-b869-d1cbd652f7b0
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hrdj5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-904693-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-lwbfx                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-904693-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-904693-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-s8rqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-904693-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-904693-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m39s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 9m31s                  kube-proxy       
	  Normal   RegisteredNode           14m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-904693-m02 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-904693-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-904693-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m42s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           9m38s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           9m7s                   node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   Starting                 7m51s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m51s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m51s (x8 over 7m51s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m51s (x8 over 7m51s)  kubelet          Node ha-904693-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m51s (x8 over 7m51s)  kubelet          Node ha-904693-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        6m51s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m18s                  node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	  Normal   RegisteredNode           52s                    node-controller  Node ha-904693-m02 event: Registered Node ha-904693-m02 in Controller
	
	
	Name:               ha-904693-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T12_40_18_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:40:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:52:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:51:57 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:51:57 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:51:57 +0000   Sat, 18 Oct 2025 12:40:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:51:57 +0000   Sat, 18 Oct 2025 12:40:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-904693-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                5cf17c72-8409-4937-903b-03a3a82789c6
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2bmmd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kindnet-nqql7               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-proxy-25w58            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m10s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 8m49s                  kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)      kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)      kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)      kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-904693-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m42s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           9m38s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Warning  CgroupV1                 9m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m13s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    9m9s (x8 over 9m12s)   kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  9m9s (x8 over 9m12s)   kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m9s (x8 over 9m12s)   kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m7s                   node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   Starting                 5m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m36s (x8 over 5m40s)  kubelet          Node ha-904693-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m36s (x8 over 5m40s)  kubelet          Node ha-904693-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m36s (x8 over 5m40s)  kubelet          Node ha-904693-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m18s                  node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	  Normal   RegisteredNode           52s                    node-controller  Node ha-904693-m04 event: Registered Node ha-904693-m04 in Controller
	
	
	Name:               ha-904693-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-904693-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=ha-904693
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T12_51_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:51:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-904693-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:52:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:52:23 +0000   Sat, 18 Oct 2025 12:51:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:52:23 +0000   Sat, 18 Oct 2025 12:51:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:52:23 +0000   Sat, 18 Oct 2025 12:51:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:52:23 +0000   Sat, 18 Oct 2025 12:52:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-904693-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                11e2918d-37af-4864-9ce0-be6daa72bd1a
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-904693-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         46s
	  kube-system                 kindnet-jsj6h                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      50s
	  kube-system                 kube-apiserver-ha-904693-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-ha-904693-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-7rlqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-ha-904693-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-vip-ha-904693-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        45s   kube-proxy       
	  Normal  RegisteredNode  48s   node-controller  Node ha-904693-m05 event: Registered Node ha-904693-m05 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node ha-904693-m05 event: Registered Node ha-904693-m05 in Controller
	
	
	==> dmesg <==
	[  +0.001107] FS-Cache: N-key=[10] '34323937363632323639'
	[Oct18 12:15] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 12:16] overlayfs: idmapped layers are currently not supported
	[Oct18 12:22] overlayfs: idmapped layers are currently not supported
	[Oct18 12:23] overlayfs: idmapped layers are currently not supported
	[Oct18 12:35] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000048 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001047] FS-Cache: O-cookie d=00000000d8d7ca74{9P.session} n=000000006094aa8a
	[  +0.001123] FS-Cache: O-key=[10] '34323938373639393330'
	[  +0.000853] FS-Cache: N-cookie c=00000049 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001032] FS-Cache: N-cookie d=00000000d8d7ca74{9P.session} n=000000001487bd7a
	[  +0.001121] FS-Cache: N-key=[10] '34323938373639393330'
	[Oct18 12:36] overlayfs: idmapped layers are currently not supported
	[Oct18 12:37] overlayfs: idmapped layers are currently not supported
	[Oct18 12:38] overlayfs: idmapped layers are currently not supported
	[Oct18 12:40] overlayfs: idmapped layers are currently not supported
	[Oct18 12:41] overlayfs: idmapped layers are currently not supported
	[Oct18 12:42] overlayfs: idmapped layers are currently not supported
	[  +3.761821] overlayfs: idmapped layers are currently not supported
	[ +36.492252] overlayfs: idmapped layers are currently not supported
	[Oct18 12:43] overlayfs: idmapped layers are currently not supported
	[Oct18 12:44] overlayfs: idmapped layers are currently not supported
	[  +3.556272] overlayfs: idmapped layers are currently not supported
	[Oct18 12:47] overlayfs: idmapped layers are currently not supported
	[Oct18 12:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [2df8ceef3f1125567cb2b22627f6c2b90e7425331ffa5e5bbe8a97dcb849d5af] <==
	{"level":"info","ts":"2025-10-18T12:51:27.855342Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2cb626afcaec816f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-18T12:51:27.855417Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f"}
	{"level":"error","ts":"2025-10-18T12:51:28.008898Z","caller":"etcdserver/server.go:1585","msg":"rejecting promote learner: learner is not ready","learner-ready-percent":0,"ready-percent-threshold":0.9,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).isLearnerReady\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1585\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).mayPromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1526\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).promoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1498\ngo.etcd.io/etcd/server/v3/etcdserver.(*EtcdServer).PromoteMember\n\tgo.etcd.io/etcd/server/v3/etcdserver/server.go:1450\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.(*ClusterServer).MemberPromote\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/member.go:101\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler.func1\n\tgo.etcd.io/etcd/api/v3@v3.6.4/etcdserverpb/rpc.pb.go:7432\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.(*ServerMetrics).UnaryServerInterceptor.UnaryServerInterceptor.func12\n\tgithub.com/grpc-ecosystem/go-grpc-middleware/v2@v2.1.0/interceptors/server.go:22\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newUnaryInterceptor.func5\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:74\ngoogle.golang.org/grpc.getChainUnaryHandler.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1217\ngo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc.Server.newLogUnaryInterceptor.func4\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/v3rpc/interceptor.go:81\ngoogle.golang.org/grpc.NewServer.chainUnaryServerInterceptors.chainUnaryInterceptors.func1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1208\ngo.etcd.io/etcd/api/v3/etcdserverpb._Cluster_MemberPromote_Handler\n\tgo.etcd.io/etcd/api/v3@v3.6.4/etcdserverpb/rpc.pb.go:7434\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1405\ngoogle.golang.org/grpc.(*Server).handleStream\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1815\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\tgoogle.golang.org/grpc@v1.71.1/server.go:1035"}
	{"level":"info","ts":"2025-10-18T12:51:28.119954Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2cb626afcaec816f","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-18T12:51:28.120116Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f"}
	{"level":"warn","ts":"2025-10-18T12:51:28.462558Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:51:28.499377Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:51:28.507836Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(2709177748741186658 3221805119895798127 12593026477526642892)"}
	{"level":"info","ts":"2025-10-18T12:51:28.507972Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"2cb626afcaec816f"}
	{"level":"info","ts":"2025-10-18T12:51:28.508007Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"2cb626afcaec816f"}
	{"level":"warn","ts":"2025-10-18T12:51:28.545723Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2cb626afcaec816f","error":"failed to write 2cb626afcaec816f on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:54070: write: connection reset by peer)"}
	{"level":"warn","ts":"2025-10-18T12:51:28.545801Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f"}
	{"level":"warn","ts":"2025-10-18T12:51:28.892997Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f"}
	{"level":"info","ts":"2025-10-18T12:51:28.951222Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2cb626afcaec816f"}
	{"level":"info","ts":"2025-10-18T12:51:29.084419Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f"}
	{"level":"info","ts":"2025-10-18T12:51:29.155290Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f"}
	{"level":"info","ts":"2025-10-18T12:51:29.416142Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2cb626afcaec816f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-18T12:51:29.416252Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f"}
	{"level":"info","ts":"2025-10-18T12:51:29.417360Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"2cb626afcaec816f","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-18T12:51:29.417438Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2cb626afcaec816f"}
	{"level":"info","ts":"2025-10-18T12:51:36.108575Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T12:51:42.453752Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T12:51:57.855895Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"2cb626afcaec816f","bytes":6874043,"size":"6.9 MB","took":"30.072421902s"}
	{"level":"warn","ts":"2025-10-18T12:52:28.140268Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.099064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:370224"}
	{"level":"info","ts":"2025-10-18T12:52:28.140345Z","caller":"traceutil/trace.go:172","msg":"trace[297428102] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3730; }","duration":"175.198075ms","start":"2025-10-18T12:52:27.965133Z","end":"2025-10-18T12:52:28.140331Z","steps":["trace[297428102] 'range keys from bolt db'  (duration: 174.248674ms)"],"step_count":1}
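
	The etcd log shows the new member 2cb626afcaec816f joining as a learner, the first MemberPromote attempt being rejected ("learner is not ready", ready-percent-threshold 0.9), and the promotion succeeding once the ~6.9 MB merged snapshot has been sent. A minimal clientv3 sketch of the same promote-when-ready flow follows; direct access to the 127.0.0.1:2379 endpoint is assumed, and retry/error handling is simplified.

	package main

	import (
		"context"
		"fmt"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"127.0.0.1:2379"}, // assumed local endpoint
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		members, err := cli.MemberList(ctx)
		if err != nil {
			panic(err)
		}
		for _, m := range members.Members {
			if !m.IsLearner {
				continue
			}
			// MemberPromote is rejected until the learner's log has caught up with
			// the leader (the "learner is not ready" error above), so retry.
			for ctx.Err() == nil {
				if _, err := cli.MemberPromote(ctx, m.ID); err == nil {
					fmt.Printf("promoted learner %x\n", m.ID)
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
	}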
	
	
	==> kernel <==
	 12:52:28 up  4:35,  0 user,  load average: 2.93, 1.79, 1.80
	Linux ha-904693 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b7079b16a9b7a2a39fa399b6c2af14323e7571db253c3823a3927f85257d9854] <==
	I1018 12:51:55.002244       1 main.go:324] Node ha-904693-m05 has CIDR [10.244.2.0/24] 
	I1018 12:52:05.001588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:52:05.001625       1 main.go:301] handling current node
	I1018 12:52:05.001641       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:52:05.001647       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:52:05.001842       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:52:05.001855       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:52:05.001923       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1018 12:52:05.001934       1 main.go:324] Node ha-904693-m05 has CIDR [10.244.2.0/24] 
	I1018 12:52:14.996530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:52:14.996573       1 main.go:301] handling current node
	I1018 12:52:14.996590       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:52:14.996595       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:52:14.996800       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:52:14.996812       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:52:14.996872       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1018 12:52:14.996883       1 main.go:324] Node ha-904693-m05 has CIDR [10.244.2.0/24] 
	I1018 12:52:24.997329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:52:24.997371       1 main.go:301] handling current node
	I1018 12:52:24.997387       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 12:52:24.997394       1 main.go:324] Node ha-904693-m02 has CIDR [10.244.1.0/24] 
	I1018 12:52:24.997521       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 12:52:24.997531       1 main.go:324] Node ha-904693-m04 has CIDR [10.244.3.0/24] 
	I1018 12:52:24.997582       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1018 12:52:24.997587       1 main.go:324] Node ha-904693-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [f3e12646a28acaf33acb91c449640e2b7c2e1b51a07fda1222a124108fa3a60d] <==
	{"level":"warn","ts":"2025-10-18T12:46:18.148825Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40026672c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148839Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015fd0e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148853Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400202ed20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148867Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40011f43c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148880Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002666960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148896Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd2780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.148670Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002ce03c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.151438Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002174960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.151912Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40015fc5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155109Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd30e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155205Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40018a8d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155239Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a325a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155306Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002666960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.155314Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400141f4a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160120Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40027dc780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160123Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002dd30e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160241Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001bb0f00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-10-18T12:46:18.160569Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000ed9860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I1018 12:46:33.558140       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:46:36.235887       1 controller.go:667] quota admission added evaluator for: endpoints
	W1018 12:46:47.238564       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1018 12:46:47.262716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:47:10.772461       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:47:11.078494       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:47:11.124245       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8] <==
	I1018 12:46:09.683548       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:46:10.407864       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 12:46:10.407894       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:46:10.409427       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 12:46:10.409610       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 12:46:10.409861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 12:46:10.409969       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:46:22.428900       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d0b92a674c67cc0bc4ee48508f01d9282e112f6bb12126b73c27cd760d89c22a] <==
	E1018 12:47:30.656996       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657450       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657482       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657489       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657495       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	E1018 12:47:50.657505       1 gc_controller.go:151] "Failed to get node" err="node \"ha-904693-m03\" not found" logger="pod-garbage-collector-controller" node="ha-904693-m03"
	I1018 12:47:50.671214       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-904693-m03"
	I1018 12:47:50.721328       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-904693-m03"
	I1018 12:47:50.721365       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-j75n6"
	I1018 12:47:50.760722       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-j75n6"
	I1018 12:47:50.760993       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-904693-m03"
	I1018 12:47:50.808228       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-904693-m03"
	I1018 12:47:50.808276       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bckwd"
	I1018 12:47:50.847148       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bckwd"
	I1018 12:47:50.847260       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-904693-m03"
	I1018 12:47:50.881140       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-904693-m03"
	I1018 12:47:50.881190       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-904693-m03"
	I1018 12:47:50.922459       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-904693-m03"
	I1018 12:47:50.922494       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-904693-m03"
	I1018 12:47:50.962354       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-904693-m03"
	I1018 12:51:38.151849       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-904693-m04"
	I1018 12:51:38.151941       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-904693-m05\" does not exist"
	I1018 12:51:38.189024       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-904693-m05" podCIDRs=["10.244.2.0/24"]
	I1018 12:51:40.666897       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-904693-m05"
	I1018 12:52:23.591948       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-904693-m04"
	
	
	==> kube-proxy [664bc261a20461615c227d76978fcabbc9c19e3de0de14724a6fb0f9bbcb8676] <==
	E1018 12:45:30.503531       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": http2: client connection lost"
	I1018 12:45:30.503572       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1018 12:45:34.448156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:34.448255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:34.448188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:34.448343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:37.516021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:37.516021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:37.516161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:37.516292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:43.916094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:45:43.916094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:43.916214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:43.916222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:43.916267       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:45:54.700040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:45:54.700039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:45:54.700156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:45:54.700208       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:45:54.700282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:46:09.964095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 12:46:09.964311       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1018 12:46:13.036094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-904693&resourceVersion=2352\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:46:13.036094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2344\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 12:46:16.108095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2345\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kube-scheduler [10798af55ae16ce657fb223cc3b8e580322135ff7246e162207a86ef8e91e5de] <==
	E1018 12:44:43.500972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:44:43.501347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:44:43.502109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:44:43.501648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:44:43.501688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:44:43.501702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:44:43.502170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:44:43.501546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:44:43.502196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 12:44:44.986536       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 12:51:38.332888       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jsj6h\": pod kindnet-jsj6h is already assigned to node \"ha-904693-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-jsj6h" node="ha-904693-m05"
	E1018 12:51:38.332978       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b1ce123c-6c1d-4be6-a6a2-7f436c2c83c8(kube-system/kindnet-jsj6h) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-jsj6h"
	E1018 12:51:38.333017       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jsj6h\": pod kindnet-jsj6h is already assigned to node \"ha-904693-m05\"" logger="UnhandledError" pod="kube-system/kindnet-jsj6h"
	E1018 12:51:38.333072       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7rlqn\": pod kube-proxy-7rlqn is already assigned to node \"ha-904693-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7rlqn" node="ha-904693-m05"
	E1018 12:51:38.333107       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 5a3b94dc-4471-400f-94ed-f4781fecfe78(kube-system/kube-proxy-7rlqn) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-7rlqn"
	I1018 12:51:38.335942       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jsj6h" node="ha-904693-m05"
	E1018 12:51:38.335858       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7rlqn\": pod kube-proxy-7rlqn is already assigned to node \"ha-904693-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-7rlqn"
	I1018 12:51:38.336006       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7rlqn" node="ha-904693-m05"
	E1018 12:51:38.423726       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x2cm7\": pod kindnet-x2cm7 is already assigned to node \"ha-904693-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-x2cm7" node="ha-904693-m05"
	E1018 12:51:38.424573       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x2cm7\": pod kindnet-x2cm7 is already assigned to node \"ha-904693-m05\"" logger="UnhandledError" pod="kube-system/kindnet-x2cm7"
	I1018 12:51:38.424679       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x2cm7" node="ha-904693-m05"
	E1018 12:51:38.466878       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sl7lv\": pod kube-proxy-sl7lv is already assigned to node \"ha-904693-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sl7lv" node="ha-904693-m05"
	E1018 12:51:38.468824       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod ce429bc9-9b87-458a-9b8e-d9ff18f86ae4(kube-system/kube-proxy-sl7lv) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-sl7lv"
	E1018 12:51:38.475580       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sl7lv\": pod kube-proxy-sl7lv is already assigned to node \"ha-904693-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-sl7lv"
	I1018 12:51:38.478103       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sl7lv" node="ha-904693-m05"
	
	
	==> kubelet <==
	Oct 18 12:45:47 ha-904693 kubelet[799]: I1018 12:45:47.150859     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:45:47 ha-904693 kubelet[799]: E1018 12:45:47.151001     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:45:51 ha-904693 kubelet[799]: E1018 12:45:51.687581     799 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-ha-904693)" podUID="d3c5d3145f312260295e29de6ab47ebb" pod="kube-system/kube-controller-manager-ha-904693"
	Oct 18 12:45:52 ha-904693 kubelet[799]: E1018 12:45:52.693374     799 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-904693.186f96842d53c593  default   2360 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-904693,UID:ha-904693,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-904693 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-904693,},FirstTimestamp:2025-10-18 12:44:33 +0000 UTC,LastTimestamp:2025-10-18 12:44:33.857399662 +0000 UTC m=+0.292657332,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-904693,}"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.122804     799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-904693?timeout=10s\": context deadline exceeded" interval="200ms"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.698869     799 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-904693\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-904693?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 18 12:45:54 ha-904693 kubelet[799]: E1018 12:45:54.699164     799 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Oct 18 12:45:55 ha-904693 kubelet[799]: I1018 12:45:55.781460     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:45:55 ha-904693 kubelet[799]: E1018 12:45:55.781682     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:04 ha-904693 kubelet[799]: E1018 12:46:04.324141     799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-904693?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	Oct 18 12:46:08 ha-904693 kubelet[799]: I1018 12:46:08.756735     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:46:14 ha-904693 kubelet[799]: E1018 12:46:14.725537     799 request.go:1196] "Unexpected error when reading response body" err="net/http: request canceled (Client.Timeout or context cancellation while reading body)"
	Oct 18 12:46:14 ha-904693 kubelet[799]: E1018 12:46:14.725613     799 controller.go:145] "Failed to ensure lease exists, will retry" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" interval="800ms"
	Oct 18 12:46:23 ha-904693 kubelet[799]: I1018 12:46:23.245175     799 scope.go:117] "RemoveContainer" containerID="6e322e8fd8012d7451b8f609740ce3f029ba37313c1bc22115ba0c35ce997610"
	Oct 18 12:46:23 ha-904693 kubelet[799]: I1018 12:46:23.245500     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:23 ha-904693 kubelet[799]: E1018 12:46:23.245643     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:25 ha-904693 kubelet[799]: I1018 12:46:25.781162     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:25 ha-904693 kubelet[799]: E1018 12:46:25.781843     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:26 ha-904693 kubelet[799]: I1018 12:46:26.573839     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:26 ha-904693 kubelet[799]: E1018 12:46:26.574012     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:39 ha-904693 kubelet[799]: I1018 12:46:39.758543     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:39 ha-904693 kubelet[799]: E1018 12:46:39.758726     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:46:53 ha-904693 kubelet[799]: I1018 12:46:53.756932     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	Oct 18 12:46:53 ha-904693 kubelet[799]: E1018 12:46:53.757548     799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-904693_kube-system(d3c5d3145f312260295e29de6ab47ebb)\"" pod="kube-system/kube-controller-manager-ha-904693" podUID="d3c5d3145f312260295e29de6ab47ebb"
	Oct 18 12:47:07 ha-904693 kubelet[799]: I1018 12:47:07.756671     799 scope.go:117] "RemoveContainer" containerID="6b9ca29a1030f2e300fa09ce8fe5087b5d01e253a371038cc28a28c82dc9c0b8"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-904693 -n ha-904693
helpers_test.go:269: (dbg) Run:  kubectl --context ha-904693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.72s)
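
The post-mortem above shows the ha-904693-m05 join itself completing: the node-ipam controller assigns PodCIDR 10.244.2.0/24 at 12:51:38, kindnet starts routing for 192.168.49.6, and etcd finishes sending a 6.9 MB merged snapshot to the new member. A minimal follow-up check is sketched below; it assumes the ha-904693 kubeconfig context and the minikube profile from this run are still available, and it is not part of the captured test output.

	# Confirm the added node registered and received the PodCIDR reported in the controller-manager log
	kubectl --context ha-904693 get node ha-904693-m05 -o jsonpath='{.spec.podCIDR}{"\n"}'
	kubectl --context ha-904693 get node ha-904693-m05 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# Cross-check the node list minikube itself reports for the profile
	out/minikube-linux-arm64 -p ha-904693 node list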

                                                
                                    

TestJSONOutput/pause/Command (2.44s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-898560 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-898560 --output=json --user=testUser: exit status 80 (2.437879239s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e7a80949-fe9e-4645-827c-f200823b6859","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-898560 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d41e9204-81ab-4730-9eb3-32173bc2d7ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T12:54:03Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"d497d632-bc4e-4720-bd50-4758bfc613be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-898560 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.44s)
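
The failure above bottoms out in the runc invocation quoted in the error payload: "sudo runc list -f json" exits with status 1 because /run/runc cannot be opened, so minikube cannot enumerate the containers it is supposed to pause. A small reproduction sketch follows, assuming the json-output-898560 node is still running; only the runc command itself is taken from the error message, the ssh wrapper around it is an assumption.

	# Re-run the exact listing command the pause path failed on, inside the node
	out/minikube-linux-arm64 -p json-output-898560 ssh "sudo runc list -f json"
	# Check whether the runc state directory exists at all on this crio node
	out/minikube-linux-arm64 -p json-output-898560 ssh "sudo ls -ld /run/runc"

The unpause failure recorded next hits the same error from the same listing step, so a single check on the node covers both results.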

                                                
                                    
TestJSONOutput/unpause/Command (1.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-898560 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-898560 --output=json --user=testUser: exit status 80 (1.568663984s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"567db0b4-828f-4456-befc-ebd157b50818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-898560 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6057f327-ab86-4ea0-b030-372ca1b52460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T12:54:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"00623448-53bd-48dd-abf3-e37a50d8139f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-898560 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.57s)

                                                
                                    
TestPause/serial/Pause (8.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-581407 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-581407 --alsologtostderr -v=5: exit status 80 (2.445371145s)

                                                
                                                
-- stdout --
	* Pausing node pause-581407 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:17:50.337367 1001826 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:17:50.339350 1001826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:17:50.339413 1001826 out.go:374] Setting ErrFile to fd 2...
	I1018 13:17:50.339434 1001826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:17:50.339777 1001826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:17:50.340170 1001826 out.go:368] Setting JSON to false
	I1018 13:17:50.340231 1001826 mustload.go:65] Loading cluster: pause-581407
	I1018 13:17:50.340737 1001826 config.go:182] Loaded profile config "pause-581407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:50.350697 1001826 cli_runner.go:164] Run: docker container inspect pause-581407 --format={{.State.Status}}
	I1018 13:17:50.381232 1001826 host.go:66] Checking if "pause-581407" exists ...
	I1018 13:17:50.381553 1001826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:17:50.475141 1001826 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 13:17:50.465981373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:17:50.475854 1001826 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-581407 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 13:17:50.479767 1001826 out.go:179] * Pausing node pause-581407 ... 
	I1018 13:17:50.482761 1001826 host.go:66] Checking if "pause-581407" exists ...
	I1018 13:17:50.483080 1001826 ssh_runner.go:195] Run: systemctl --version
	I1018 13:17:50.483134 1001826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:50.503881 1001826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:50.610838 1001826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:17:50.626991 1001826 pause.go:52] kubelet running: true
	I1018 13:17:50.627067 1001826 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:17:50.874624 1001826 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:17:50.874720 1001826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:17:50.966075 1001826 cri.go:89] found id: "3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441"
	I1018 13:17:50.966148 1001826 cri.go:89] found id: "99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71"
	I1018 13:17:50.966169 1001826 cri.go:89] found id: "db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715"
	I1018 13:17:50.966190 1001826 cri.go:89] found id: "cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6"
	I1018 13:17:50.966229 1001826 cri.go:89] found id: "1d867435092e2ae05fa215ae5358374d93d6b6d49cf7df765b0828501daa311a"
	I1018 13:17:50.966253 1001826 cri.go:89] found id: "32797f415a8c26a1b4aa88afa3f9137729690bd7d45311a68519beaccac43d20"
	I1018 13:17:50.966273 1001826 cri.go:89] found id: "1b1355c4f0d44581bb6ae756cce066454470ecb4bbe2947c437b4450819922e1"
	I1018 13:17:50.966311 1001826 cri.go:89] found id: "70e591f358e983efcdf4f01017e333dfaa6bfb26b93122e90d41ce990b9ac96b"
	I1018 13:17:50.966334 1001826 cri.go:89] found id: "964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361"
	I1018 13:17:50.966357 1001826 cri.go:89] found id: "dadb5dd59eca975bd8d89eca080be31edffaa1af272cf6f32406ac8cd85fc5c8"
	I1018 13:17:50.966390 1001826 cri.go:89] found id: "69a5d51f6f41fad47a432f28e5b9ebd476f9e34e0169affa15deeb3be20b5ef0"
	I1018 13:17:50.966410 1001826 cri.go:89] found id: "a5a7eba2cfa84bb8e262a1c5817166f519b4e09375861ce4a544520381703cc1"
	I1018 13:17:50.966429 1001826 cri.go:89] found id: "7adb65405ca120df8f04c836231d865be3c0d67b70d53b94513214e3425de043"
	I1018 13:17:50.966470 1001826 cri.go:89] found id: "4d2ab802325be7559c201bd12b2e174c54b89efbf9a5e54f0f6d4ff1d99f5680"
	I1018 13:17:50.966493 1001826 cri.go:89] found id: ""
	I1018 13:17:50.966590 1001826 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:17:50.979984 1001826 retry.go:31] will retry after 370.769843ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:17:50Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:17:51.351624 1001826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:17:51.366423 1001826 pause.go:52] kubelet running: false
	I1018 13:17:51.366540 1001826 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:17:51.586549 1001826 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:17:51.586697 1001826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:17:51.744859 1001826 cri.go:89] found id: "3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441"
	I1018 13:17:51.744941 1001826 cri.go:89] found id: "99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71"
	I1018 13:17:51.744961 1001826 cri.go:89] found id: "db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715"
	I1018 13:17:51.744982 1001826 cri.go:89] found id: "cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6"
	I1018 13:17:51.745015 1001826 cri.go:89] found id: "1d867435092e2ae05fa215ae5358374d93d6b6d49cf7df765b0828501daa311a"
	I1018 13:17:51.745039 1001826 cri.go:89] found id: "32797f415a8c26a1b4aa88afa3f9137729690bd7d45311a68519beaccac43d20"
	I1018 13:17:51.745059 1001826 cri.go:89] found id: "1b1355c4f0d44581bb6ae756cce066454470ecb4bbe2947c437b4450819922e1"
	I1018 13:17:51.745079 1001826 cri.go:89] found id: "70e591f358e983efcdf4f01017e333dfaa6bfb26b93122e90d41ce990b9ac96b"
	I1018 13:17:51.745114 1001826 cri.go:89] found id: "964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361"
	I1018 13:17:51.745137 1001826 cri.go:89] found id: "dadb5dd59eca975bd8d89eca080be31edffaa1af272cf6f32406ac8cd85fc5c8"
	I1018 13:17:51.745157 1001826 cri.go:89] found id: "69a5d51f6f41fad47a432f28e5b9ebd476f9e34e0169affa15deeb3be20b5ef0"
	I1018 13:17:51.745190 1001826 cri.go:89] found id: "a5a7eba2cfa84bb8e262a1c5817166f519b4e09375861ce4a544520381703cc1"
	I1018 13:17:51.745208 1001826 cri.go:89] found id: "7adb65405ca120df8f04c836231d865be3c0d67b70d53b94513214e3425de043"
	I1018 13:17:51.745226 1001826 cri.go:89] found id: "4d2ab802325be7559c201bd12b2e174c54b89efbf9a5e54f0f6d4ff1d99f5680"
	I1018 13:17:51.745258 1001826 cri.go:89] found id: ""
	I1018 13:17:51.745349 1001826 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:17:51.761629 1001826 retry.go:31] will retry after 555.367279ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:17:51Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:17:52.317295 1001826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:17:52.334298 1001826 pause.go:52] kubelet running: false
	I1018 13:17:52.334467 1001826 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:17:52.533379 1001826 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:17:52.533560 1001826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:17:52.662255 1001826 cri.go:89] found id: "3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441"
	I1018 13:17:52.662327 1001826 cri.go:89] found id: "99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71"
	I1018 13:17:52.662347 1001826 cri.go:89] found id: "db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715"
	I1018 13:17:52.662367 1001826 cri.go:89] found id: "cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6"
	I1018 13:17:52.662404 1001826 cri.go:89] found id: "1d867435092e2ae05fa215ae5358374d93d6b6d49cf7df765b0828501daa311a"
	I1018 13:17:52.662426 1001826 cri.go:89] found id: "32797f415a8c26a1b4aa88afa3f9137729690bd7d45311a68519beaccac43d20"
	I1018 13:17:52.662445 1001826 cri.go:89] found id: "1b1355c4f0d44581bb6ae756cce066454470ecb4bbe2947c437b4450819922e1"
	I1018 13:17:52.662463 1001826 cri.go:89] found id: "70e591f358e983efcdf4f01017e333dfaa6bfb26b93122e90d41ce990b9ac96b"
	I1018 13:17:52.662491 1001826 cri.go:89] found id: "964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361"
	I1018 13:17:52.662516 1001826 cri.go:89] found id: "dadb5dd59eca975bd8d89eca080be31edffaa1af272cf6f32406ac8cd85fc5c8"
	I1018 13:17:52.662535 1001826 cri.go:89] found id: "69a5d51f6f41fad47a432f28e5b9ebd476f9e34e0169affa15deeb3be20b5ef0"
	I1018 13:17:52.662554 1001826 cri.go:89] found id: "a5a7eba2cfa84bb8e262a1c5817166f519b4e09375861ce4a544520381703cc1"
	I1018 13:17:52.662583 1001826 cri.go:89] found id: "7adb65405ca120df8f04c836231d865be3c0d67b70d53b94513214e3425de043"
	I1018 13:17:52.662616 1001826 cri.go:89] found id: "4d2ab802325be7559c201bd12b2e174c54b89efbf9a5e54f0f6d4ff1d99f5680"
	I1018 13:17:52.662633 1001826 cri.go:89] found id: ""
	I1018 13:17:52.662725 1001826 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:17:52.682295 1001826 out.go:203] 
	W1018 13:17:52.685329 1001826 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:17:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 13:17:52.685397 1001826 out.go:285] * 
	W1018 13:17:52.693158 1001826 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 13:17:52.696221 1001826 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-581407 --alsologtostderr -v=5" : exit status 80
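Editor's note: the failure above reduces to one repeated probe. After disabling the kubelet, the pause path shells into the node and runs `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" on every retry, even though the preceding crictl call still reports fourteen running containers across the queried namespaces. Below is a minimal reproduction sketch of that probe; it is not minikube's code, and it assumes it is run on the node itself (for example via `minikube ssh`) where sudo, crictl and runc are available.

// repro.go: a minimal sketch (not minikube's code) of the check that fails above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// What the pause path sees first: the CRI still reports running containers.
	crictlOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(crictlOut))
	fmt.Printf("crictl sees %d kube-system containers\n", len(ids))

	// What it runs next: runc against its default state directory, which is
	// what errors with "open /run/runc: no such file or directory" in the log above.
	if out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput(); err != nil {
		fmt.Printf("runc list failed (%v): %s", err, out)
	} else {
		fmt.Printf("runc list: %s\n", out)
	}
}

Whether the right fix is to point this check at the runtime root CRI-O actually uses is beyond what the report shows; the sketch only makes the mismatch visible.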
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-581407
helpers_test.go:243: (dbg) docker inspect pause-581407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96",
	        "Created": "2025-10-18T13:16:13.554464597Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 995711,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:16:13.620193479Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96/hostname",
	        "HostsPath": "/var/lib/docker/containers/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96/hosts",
	        "LogPath": "/var/lib/docker/containers/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96-json.log",
	        "Name": "/pause-581407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-581407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-581407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96",
	                "LowerDir": "/var/lib/docker/overlay2/8ffcb757be154260dde70ec598bbc2538b02f4fb36f794898b58c137064d232b-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ffcb757be154260dde70ec598bbc2538b02f4fb36f794898b58c137064d232b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ffcb757be154260dde70ec598bbc2538b02f4fb36f794898b58c137064d232b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ffcb757be154260dde70ec598bbc2538b02f4fb36f794898b58c137064d232b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-581407",
	                "Source": "/var/lib/docker/volumes/pause-581407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-581407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-581407",
	                "name.minikube.sigs.k8s.io": "pause-581407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a9033d4eca67c42d4bdab5224d490c08a5e00bde86ddce199d05e81e44ec6b3",
	            "SandboxKey": "/var/run/docker/netns/2a9033d4eca6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34136"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34134"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34135"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-581407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:5e:07:89:80:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "442df33bc12daff735b4003b303b115650e1303690b53f79fadf60e934b85454",
	                    "EndpointID": "2cf927cb00d2fe121445cb2da95fdb8161fe67cba1cfccc07e5169ef07cff410",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-581407",
	                        "0287d46aaf28"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
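Editor's note: the NetworkSettings.Ports block above is also where the earlier cli_runner call (13:17:50.483134) gets its SSH endpoint: the template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` resolves to 34132, the port the sshutil client then dials on 127.0.0.1. A small sketch of that lookup, shelling out to the same Docker CLI template (container name taken from this report):

// port.go: resolve the mapped SSH port for the kic container, mirroring the
// docker template used by cli_runner in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name = "pause-581407" // container name from this report
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + strings.TrimSpace(string(out))) // 34132 above
}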
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-581407 -n pause-581407
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-581407 -n pause-581407: exit status 2 (458.683406ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
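Editor's note: the `--format={{.Host}}` flag renders a single field of the status output as a Go template, which is why the command prints only `Running` while still exiting with status 2 (which the helper treats as possibly fine after a failed pause). A hedged sketch of the same template mechanism over a stand-in struct, not minikube's real status type:

// status_format.go: illustrates the {{.Host}} template used above.
// The Status struct here is a stand-in, not minikube's definition.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = t.Execute(os.Stdout, s) // prints "Running", matching the stdout block above
}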
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-581407 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-581407 logs -n 25: (2.020880128s)
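Editor's note: everything in the dump below follows the klog line format declared in its header ("[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"), which is what makes it possible to tell apart the two interleaved minikube processes (999649 driving pause-581407 and 999679 driving kubernetes-upgrade-022190). A minimal parsing sketch for lines of that shape, assuming only the format shown here:

// klogline.go: split the klog-format lines in the dump below into their fields.
package main

import (
	"fmt"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg  (leading tabs tolerated)
var klogLine = regexp.MustCompile(`^\s*([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	sample := "I1018 13:17:26.712562  999649 out.go:179] * Using the docker driver based on existing profile"
	if m := klogLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}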
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-166782 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:11 UTC │ 18 Oct 25 13:13 UTC │
	│ start   │ -p missing-upgrade-972770 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-972770    │ jenkins │ v1.37.0 │ 18 Oct 25 13:11 UTC │ 18 Oct 25 13:12 UTC │
	│ delete  │ -p missing-upgrade-972770                                                                                                                │ missing-upgrade-972770    │ jenkins │ v1.37.0 │ 18 Oct 25 13:12 UTC │ 18 Oct 25 13:12 UTC │
	│ start   │ -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:12 UTC │ 18 Oct 25 13:12 UTC │
	│ stop    │ -p kubernetes-upgrade-022190                                                                                                             │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:12 UTC │ 18 Oct 25 13:12 UTC │
	│ start   │ -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:12 UTC │ 18 Oct 25 13:17 UTC │
	│ delete  │ -p NoKubernetes-166782                                                                                                                   │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │ 18 Oct 25 13:13 UTC │
	│ start   │ -p NoKubernetes-166782 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │ 18 Oct 25 13:13 UTC │
	│ ssh     │ -p NoKubernetes-166782 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │                     │
	│ stop    │ -p NoKubernetes-166782                                                                                                                   │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │ 18 Oct 25 13:13 UTC │
	│ start   │ -p NoKubernetes-166782 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │ 18 Oct 25 13:14 UTC │
	│ ssh     │ -p NoKubernetes-166782 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:14 UTC │                     │
	│ delete  │ -p NoKubernetes-166782                                                                                                                   │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:14 UTC │ 18 Oct 25 13:14 UTC │
	│ start   │ -p stopped-upgrade-311504 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-311504    │ jenkins │ v1.32.0 │ 18 Oct 25 13:14 UTC │ 18 Oct 25 13:14 UTC │
	│ stop    │ stopped-upgrade-311504 stop                                                                                                              │ stopped-upgrade-311504    │ jenkins │ v1.32.0 │ 18 Oct 25 13:14 UTC │ 18 Oct 25 13:14 UTC │
	│ start   │ -p stopped-upgrade-311504 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-311504    │ jenkins │ v1.37.0 │ 18 Oct 25 13:14 UTC │ 18 Oct 25 13:15 UTC │
	│ delete  │ -p stopped-upgrade-311504                                                                                                                │ stopped-upgrade-311504    │ jenkins │ v1.37.0 │ 18 Oct 25 13:15 UTC │ 18 Oct 25 13:15 UTC │
	│ start   │ -p running-upgrade-273873 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-273873    │ jenkins │ v1.32.0 │ 18 Oct 25 13:15 UTC │ 18 Oct 25 13:15 UTC │
	│ start   │ -p running-upgrade-273873 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-273873    │ jenkins │ v1.37.0 │ 18 Oct 25 13:15 UTC │ 18 Oct 25 13:16 UTC │
	│ delete  │ -p running-upgrade-273873                                                                                                                │ running-upgrade-273873    │ jenkins │ v1.37.0 │ 18 Oct 25 13:16 UTC │ 18 Oct 25 13:16 UTC │
	│ start   │ -p pause-581407 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-581407              │ jenkins │ v1.37.0 │ 18 Oct 25 13:16 UTC │ 18 Oct 25 13:17 UTC │
	│ start   │ -p pause-581407 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-581407              │ jenkins │ v1.37.0 │ 18 Oct 25 13:17 UTC │ 18 Oct 25 13:17 UTC │
	│ start   │ -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:17 UTC │                     │
	│ pause   │ -p pause-581407 --alsologtostderr -v=5                                                                                                   │ pause-581407              │ jenkins │ v1.37.0 │ 18 Oct 25 13:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:17:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:17:26.712562  999649 out.go:179] * Using the docker driver based on existing profile
	I1018 13:17:26.710500  999679 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:17:26.710657  999679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:17:26.710666  999679 out.go:374] Setting ErrFile to fd 2...
	I1018 13:17:26.710671  999679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:17:26.710944  999679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:17:26.711798  999679 out.go:368] Setting JSON to false
	I1018 13:17:26.712876  999679 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17999,"bootTime":1760775448,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:17:26.712970  999679 start.go:141] virtualization:  
	I1018 13:17:26.716253  999679 out.go:179] * [kubernetes-upgrade-022190] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:17:26.719292  999679 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:17:26.719384  999679 notify.go:220] Checking for updates...
	I1018 13:17:26.725376  999679 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:17:26.728502  999679 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:17:26.731313  999679 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:17:26.734365  999679 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:17:26.737155  999679 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:17:26.716349  999649 start.go:305] selected driver: docker
	I1018 13:17:26.716369  999649 start.go:925] validating driver "docker" against &{Name:pause-581407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:26.716500  999649 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:17:26.716604  999649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:17:26.811858  999649 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 13:17:26.800241862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:17:26.812272  999649 cni.go:84] Creating CNI manager for ""
	I1018 13:17:26.812334  999649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:17:26.812375  999649 start.go:349] cluster config:
	{Name:pause-581407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:26.815807  999649 out.go:179] * Starting "pause-581407" primary control-plane node in "pause-581407" cluster
	I1018 13:17:26.818780  999649 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:17:26.821783  999649 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:17:26.743275  999679 config.go:182] Loaded profile config "kubernetes-upgrade-022190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:26.743910  999679 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:17:26.816802  999679 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:17:26.816932  999679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:17:26.903375  999679 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 13:17:26.89148576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:17:26.903471  999679 docker.go:318] overlay module found
	I1018 13:17:26.906839  999679 out.go:179] * Using the docker driver based on existing profile
	I1018 13:17:26.824725  999649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:17:26.824785  999649 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:17:26.824795  999649 cache.go:58] Caching tarball of preloaded images
	I1018 13:17:26.824885  999649 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:17:26.824895  999649 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:17:26.825056  999649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/config.json ...
	I1018 13:17:26.825288  999649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:17:26.856324  999649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:17:26.856347  999649 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:17:26.856362  999649 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:17:26.856392  999649 start.go:360] acquireMachinesLock for pause-581407: {Name:mk4d6dae8637ceaf27b6457e0697449ed109c7f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:17:26.856461  999649 start.go:364] duration metric: took 37.432µs to acquireMachinesLock for "pause-581407"
	I1018 13:17:26.856486  999649 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:17:26.856494  999649 fix.go:54] fixHost starting: 
	I1018 13:17:26.856758  999649 cli_runner.go:164] Run: docker container inspect pause-581407 --format={{.State.Status}}
	I1018 13:17:26.909485  999649 fix.go:112] recreateIfNeeded on pause-581407: state=Running err=<nil>
	W1018 13:17:26.909516  999649 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 13:17:26.910108  999679 start.go:305] selected driver: docker
	I1018 13:17:26.910126  999679 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-022190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:26.910201  999679 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:17:26.910997  999679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:17:27.005622  999679 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 13:17:26.990112413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:17:27.006004  999679 cni.go:84] Creating CNI manager for ""
	I1018 13:17:27.006068  999679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:17:27.006111  999679 start.go:349] cluster config:
	{Name:kubernetes-upgrade-022190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:27.009341  999679 out.go:179] * Starting "kubernetes-upgrade-022190" primary control-plane node in "kubernetes-upgrade-022190" cluster
	I1018 13:17:27.012107  999679 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:17:27.015362  999679 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:17:27.018313  999679 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:17:27.018385  999679 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:17:27.018411  999679 cache.go:58] Caching tarball of preloaded images
	I1018 13:17:27.018493  999679 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:17:27.018507  999679 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:17:27.018611  999679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/config.json ...
	I1018 13:17:27.018844  999679 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:17:27.044022  999679 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:17:27.044045  999679 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:17:27.044062  999679 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:17:27.044091  999679 start.go:360] acquireMachinesLock for kubernetes-upgrade-022190: {Name:mkdab1493b0fc19844757773d6aecef6d7580418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:17:27.044197  999679 start.go:364] duration metric: took 71.262µs to acquireMachinesLock for "kubernetes-upgrade-022190"
	I1018 13:17:27.044223  999679 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:17:27.044233  999679 fix.go:54] fixHost starting: 
	I1018 13:17:27.044506  999679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022190 --format={{.State.Status}}
	I1018 13:17:27.077811  999679 fix.go:112] recreateIfNeeded on kubernetes-upgrade-022190: state=Running err=<nil>
	W1018 13:17:27.077839  999679 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 13:17:27.081427  999679 out.go:252] * Updating the running docker "kubernetes-upgrade-022190" container ...
	I1018 13:17:27.081471  999679 machine.go:93] provisionDockerMachine start ...
	I1018 13:17:27.081548  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:27.101747  999679 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.102077  999679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34102 <nil> <nil>}
	I1018 13:17:27.102093  999679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:17:27.299345  999679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-022190
	
	I1018 13:17:27.299371  999679 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-022190"
	I1018 13:17:27.299447  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:27.317679  999679 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.318000  999679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34102 <nil> <nil>}
	I1018 13:17:27.318018  999679 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-022190 && echo "kubernetes-upgrade-022190" | sudo tee /etc/hostname
	I1018 13:17:27.518763  999679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-022190
	
	I1018 13:17:27.518845  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:27.540800  999679 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.541119  999679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34102 <nil> <nil>}
	I1018 13:17:27.541138  999679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-022190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-022190/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-022190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:17:27.720297  999679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:17:27.720323  999679 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:17:27.720345  999679 ubuntu.go:190] setting up certificates
	I1018 13:17:27.720355  999679 provision.go:84] configureAuth start
	I1018 13:17:27.720414  999679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-022190
	I1018 13:17:27.760562  999679 provision.go:143] copyHostCerts
	I1018 13:17:27.760642  999679 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:17:27.760660  999679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:17:27.760722  999679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:17:27.760917  999679 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:17:27.760927  999679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:17:27.760956  999679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:17:27.761042  999679 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:17:27.761048  999679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:17:27.761072  999679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:17:27.761128  999679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-022190 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-022190 localhost minikube]
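	Note: the server certificate above is produced in-process by minikube's Go code; a rough openssl equivalent for the same org and SANs (file names shortened, key and validity details are assumptions) would be:

	    # sign a server cert against the minikube CA with the SANs listed in the log line above
	    openssl req -new -key server-key.pem -subj "/O=jenkins.kubernetes-upgrade-022190" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:kubernetes-upgrade-022190,DNS:localhost,DNS:minikube') \
	      -out server.pem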
	I1018 13:17:28.052088  999679 provision.go:177] copyRemoteCerts
	I1018 13:17:28.052185  999679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:17:28.052236  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:28.078375  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:28.208991  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:17:28.234860  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:17:28.287283  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1018 13:17:28.337750  999679 provision.go:87] duration metric: took 617.371275ms to configureAuth
	I1018 13:17:28.337821  999679 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:17:28.338061  999679 config.go:182] Loaded profile config "kubernetes-upgrade-022190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:28.338231  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:28.360945  999679 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:28.361341  999679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34102 <nil> <nil>}
	I1018 13:17:28.361361  999679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:17:29.071822  999679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:17:29.071842  999679 machine.go:96] duration metric: took 1.990361999s to provisionDockerMachine
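	Note: the tee in the SSH command above leaves a one-line environment file on the node; whether the crio unit picks it up through an EnvironmentFile= directive is an assumption about the kic base image, but both ends are easy to confirm by hand:

	    cat /etc/sysconfig/crio.minikube            # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl cat crio | grep -i environment    # where the unit wires the extra options in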
	I1018 13:17:29.071853  999679 start.go:293] postStartSetup for "kubernetes-upgrade-022190" (driver="docker")
	I1018 13:17:29.071865  999679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:17:29.071929  999679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:17:29.071987  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:29.090215  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:29.195841  999679 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:17:29.199183  999679 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:17:29.199208  999679 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:17:29.199219  999679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:17:29.199272  999679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:17:29.199355  999679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:17:29.199474  999679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:17:29.207011  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:17:29.224867  999679 start.go:296] duration metric: took 152.998626ms for postStartSetup
	I1018 13:17:29.224968  999679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:17:29.225024  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:29.243011  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:29.349325  999679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:17:29.354264  999679 fix.go:56] duration metric: took 2.310023064s for fixHost
	I1018 13:17:29.354289  999679 start.go:83] releasing machines lock for "kubernetes-upgrade-022190", held for 2.310077464s
	I1018 13:17:29.354360  999679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-022190
	I1018 13:17:29.372124  999679 ssh_runner.go:195] Run: cat /version.json
	I1018 13:17:29.372198  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:29.372441  999679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:17:29.372498  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:29.415699  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:29.419834  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:29.561032  999679 ssh_runner.go:195] Run: systemctl --version
	I1018 13:17:29.694127  999679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:17:29.779610  999679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:17:29.789301  999679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:17:29.789428  999679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:17:29.801131  999679 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:17:29.801205  999679 start.go:495] detecting cgroup driver to use...
	I1018 13:17:29.801260  999679 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:17:29.801335  999679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:17:29.825333  999679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:17:29.843937  999679 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:17:29.844053  999679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:17:29.866893  999679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:17:29.894991  999679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:17:30.084607  999679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:17:30.337666  999679 docker.go:234] disabling docker service ...
	I1018 13:17:30.337791  999679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:17:30.354457  999679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:17:30.378611  999679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:17:30.587007  999679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:17:30.789289  999679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:17:30.807224  999679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:17:30.827795  999679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:17:30.827884  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.844853  999679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:17:30.844954  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.858975  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.869151  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.882096  999679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:17:30.901596  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.911074  999679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.919702  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.933433  999679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:17:30.952187  999679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:17:30.968237  999679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:31.175065  999679 ssh_runner.go:195] Run: sudo systemctl restart crio
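	Note: taken together, the sed edits above should leave the CRI-O drop-in looking roughly like this before the restart (section placement is an assumption; the commands only touch the keys shown):

	    cat /etc/crio/crio.conf.d/02-crio.conf
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    #   ]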
	I1018 13:17:31.383196  999679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:17:31.383296  999679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:17:31.387285  999679 start.go:563] Will wait 60s for crictl version
	I1018 13:17:31.387402  999679 ssh_runner.go:195] Run: which crictl
	I1018 13:17:31.391537  999679 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:17:31.420540  999679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:17:31.420640  999679 ssh_runner.go:195] Run: crio --version
	I1018 13:17:31.459159  999679 ssh_runner.go:195] Run: crio --version
	I1018 13:17:31.504123  999679 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:17:26.912476  999649 out.go:252] * Updating the running docker "pause-581407" container ...
	I1018 13:17:26.912508  999649 machine.go:93] provisionDockerMachine start ...
	I1018 13:17:26.912584  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:26.932233  999649 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:26.933622  999649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1018 13:17:26.933701  999649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:17:27.131346  999649 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-581407
	
	I1018 13:17:27.131366  999649 ubuntu.go:182] provisioning hostname "pause-581407"
	I1018 13:17:27.131435  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:27.163897  999649 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.164213  999649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1018 13:17:27.164231  999649 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-581407 && echo "pause-581407" | sudo tee /etc/hostname
	I1018 13:17:27.347665  999649 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-581407
	
	I1018 13:17:27.347757  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:27.380116  999649 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.380432  999649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1018 13:17:27.380449  999649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-581407' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-581407/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-581407' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:17:27.560122  999649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:17:27.560156  999649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:17:27.560183  999649 ubuntu.go:190] setting up certificates
	I1018 13:17:27.560193  999649 provision.go:84] configureAuth start
	I1018 13:17:27.560257  999649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-581407
	I1018 13:17:27.582877  999649 provision.go:143] copyHostCerts
	I1018 13:17:27.582943  999649 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:17:27.582961  999649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:17:27.583040  999649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:17:27.583133  999649 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:17:27.583139  999649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:17:27.583163  999649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:17:27.583213  999649 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:17:27.583218  999649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:17:27.583245  999649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:17:27.583295  999649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.pause-581407 san=[127.0.0.1 192.168.76.2 localhost minikube pause-581407]
	I1018 13:17:27.784578  999649 provision.go:177] copyRemoteCerts
	I1018 13:17:27.784672  999649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:17:27.784742  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:27.809286  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:27.929570  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:17:27.954569  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 13:17:27.976720  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:17:28.013992  999649 provision.go:87] duration metric: took 453.773931ms to configureAuth
	I1018 13:17:28.014018  999649 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:17:28.014257  999649 config.go:182] Loaded profile config "pause-581407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:28.014374  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:28.035788  999649 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:28.036099  999649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1018 13:17:28.036115  999649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:17:31.508135  999679 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-022190 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:17:31.525338  999679 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:17:31.529384  999679 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-022190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:17:31.529503  999679 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:17:31.529554  999679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:17:31.562491  999679 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:17:31.562513  999679 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:17:31.562576  999679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:17:31.594231  999679 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:17:31.594251  999679 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:17:31.594258  999679 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 13:17:31.594360  999679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-022190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:17:31.594442  999679 ssh_runner.go:195] Run: crio config
	I1018 13:17:31.661758  999679 cni.go:84] Creating CNI manager for ""
	I1018 13:17:31.661781  999679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:17:31.661804  999679 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:17:31.661831  999679 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-022190 NodeName:kubernetes-upgrade-022190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:17:31.661969  999679 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-022190"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:17:31.662055  999679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:17:31.670293  999679 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:17:31.670418  999679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:17:31.678246  999679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1018 13:17:31.692158  999679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:17:31.706657  999679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
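	Note: the rendered kubeadm config above is staged as kubeadm.yaml.new (2222 bytes); a hand-run sanity check of a file in this shape, which this run does not perform, would be:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new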
	I1018 13:17:31.720132  999679 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:17:31.724282  999679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:31.847867  999679 ssh_runner.go:195] Run: sudo systemctl start kubelet
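	Note: with the unit file and the 10-kubeadm.conf drop-in from the dump above scp'd into place, the effective kubelet invocation can be confirmed on the node:

	    sudo systemctl cat kubelet                       # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    systemctl show kubelet -p ExecStart --no-pager   # the merged ExecStart line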
	I1018 13:17:31.863181  999679 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190 for IP: 192.168.85.2
	I1018 13:17:31.863206  999679 certs.go:195] generating shared ca certs ...
	I1018 13:17:31.863222  999679 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:31.863369  999679 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:17:31.863417  999679 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:17:31.863428  999679 certs.go:257] generating profile certs ...
	I1018 13:17:31.863508  999679 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.key
	I1018 13:17:31.863576  999679 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/apiserver.key.69ca1e2d
	I1018 13:17:31.863620  999679 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/proxy-client.key
	I1018 13:17:31.863785  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:17:31.863841  999679 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:17:31.863858  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:17:31.863887  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:17:31.863914  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:17:31.863940  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:17:31.863984  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:17:31.864650  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:17:31.884265  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:17:31.902721  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:17:31.922291  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:17:31.941248  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1018 13:17:31.959904  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:17:31.978083  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:17:31.995144  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:17:32.017508  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:17:32.036936  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:17:32.055792  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:17:32.074252  999679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:17:32.087743  999679 ssh_runner.go:195] Run: openssl version
	I1018 13:17:32.094373  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:17:32.103068  999679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:17:32.107038  999679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:17:32.107110  999679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:17:32.148569  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:17:32.156610  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:17:32.165169  999679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:32.169161  999679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:32.169247  999679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:32.211640  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:17:32.219750  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:17:32.228155  999679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:17:32.233145  999679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:17:32.233219  999679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:17:32.274349  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
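	Note: the symlink names created above are OpenSSL subject-hash names, which is how the hashed /etc/ssl/certs directory is consulted; for example, for the minikubeCA.pem link created a few steps up:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem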
	I1018 13:17:32.282453  999679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:17:32.286239  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:17:32.328584  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:17:32.369817  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:17:32.412737  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:17:32.454600  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:17:32.497936  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 13:17:32.542003  999679 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-022190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:32.542088  999679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:17:32.542152  999679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:17:32.575327  999679 cri.go:89] found id: "8e3f2864e7a88e5d18135c4a49b9a8d0bfe0d1970d8fd27361631de16677bd02"
	I1018 13:17:32.575348  999679 cri.go:89] found id: "65ebe734654c08110bab37ac69645aa818529163d0c54b77fdfa6a2d365dc9da"
	I1018 13:17:32.575354  999679 cri.go:89] found id: "dfc2c882414c80809287a665f372e6f4df67ef4083d36c10fe38f67360817634"
	I1018 13:17:32.575361  999679 cri.go:89] found id: "74d006fb029845a9437436f6107c51cac3db1f7c909ed6ef8629e15f2a2b7e6f"
	I1018 13:17:32.575365  999679 cri.go:89] found id: ""
	I1018 13:17:32.575428  999679 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:17:32.586239  999679 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:17:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:17:32.586320  999679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:17:32.593683  999679 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 13:17:32.593703  999679 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:17:32.593767  999679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:17:32.600869  999679 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:17:32.601592  999679 kubeconfig.go:125] found "kubernetes-upgrade-022190" server: "https://192.168.85.2:8443"
	I1018 13:17:32.602434  999679 kapi.go:59] client config for kubernetes-upgrade-022190: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 13:17:32.602928  999679 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 13:17:32.602945  999679 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 13:17:32.602952  999679 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 13:17:32.602957  999679 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 13:17:32.602967  999679 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 13:17:32.603302  999679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:17:32.610702  999679 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 13:17:32.610778  999679 kubeadm.go:601] duration metric: took 17.068126ms to restartPrimaryControlPlane
	I1018 13:17:32.610794  999679 kubeadm.go:402] duration metric: took 68.807535ms to StartCluster
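	Note: the restart-in-place decision above hinges on that diff: when the staged kubeadm.yaml.new matches the kubeadm.yaml already on the node, the control plane is restarted rather than re-initialised. The equivalent check by hand:

	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      && echo "configs match - restart in place" \
	      || echo "configs differ - reconfiguration needed"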
	I1018 13:17:32.610810  999679 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:32.610893  999679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:17:32.611886  999679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:32.612129  999679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:17:32.612359  999679 config.go:182] Loaded profile config "kubernetes-upgrade-022190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:32.612425  999679 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:17:32.612611  999679 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-022190"
	I1018 13:17:32.612635  999679 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-022190"
	W1018 13:17:32.612644  999679 addons.go:247] addon storage-provisioner should already be in state true
	I1018 13:17:32.612685  999679 host.go:66] Checking if "kubernetes-upgrade-022190" exists ...
	I1018 13:17:32.612825  999679 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-022190"
	I1018 13:17:32.612863  999679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-022190"
	I1018 13:17:32.613120  999679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022190 --format={{.State.Status}}
	I1018 13:17:32.613285  999679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022190 --format={{.State.Status}}
	I1018 13:17:32.618346  999679 out.go:179] * Verifying Kubernetes components...
	I1018 13:17:32.621084  999679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:32.647774  999679 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:17:33.474908  999649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:17:33.474930  999649 machine.go:96] duration metric: took 6.562414337s to provisionDockerMachine
	I1018 13:17:33.474940  999649 start.go:293] postStartSetup for "pause-581407" (driver="docker")
	I1018 13:17:33.474951  999649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:17:33.475012  999649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:17:33.475050  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:33.501382  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:33.612067  999649 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:17:33.615695  999649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:17:33.615726  999649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:17:33.615743  999649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:17:33.615800  999649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:17:33.615907  999649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:17:33.616020  999649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:17:33.623890  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:17:33.642468  999649 start.go:296] duration metric: took 167.512703ms for postStartSetup
	I1018 13:17:33.642553  999649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:17:33.642614  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:33.662105  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:33.765115  999649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:17:33.770030  999649 fix.go:56] duration metric: took 6.913529211s for fixHost
	I1018 13:17:33.770056  999649 start.go:83] releasing machines lock for "pause-581407", held for 6.913582003s
	I1018 13:17:33.770125  999649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-581407
	I1018 13:17:33.786800  999649 ssh_runner.go:195] Run: cat /version.json
	I1018 13:17:33.786839  999649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:17:33.786862  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:33.786904  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:33.806539  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:33.828176  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:34.011340  999649 ssh_runner.go:195] Run: systemctl --version
	I1018 13:17:34.018370  999649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:17:34.060533  999649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:17:34.065106  999649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:17:34.065185  999649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:17:34.074541  999649 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:17:34.074566  999649 start.go:495] detecting cgroup driver to use...
	I1018 13:17:34.074601  999649 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:17:34.074654  999649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:17:34.090986  999649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:17:34.107207  999649 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:17:34.107276  999649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:17:34.125299  999649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:17:34.142862  999649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:17:34.300649  999649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:17:34.439868  999649 docker.go:234] disabling docker service ...
	I1018 13:17:34.439956  999649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:17:34.461979  999649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:17:34.479456  999649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:17:34.634387  999649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:17:34.772493  999649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:17:34.788759  999649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:17:34.806459  999649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:17:34.806529  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.815229  999649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:17:34.815300  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.825376  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.836364  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.849757  999649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:17:34.858181  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.871115  999649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.879927  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.889939  999649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:17:34.898070  999649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
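The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before CRI-O is restarted. A minimal Go sketch of one of those edits, the pause_image rewrite, is below; it is an illustration of the pattern, not minikube's actual code, and assumes the drop-in path and image tag shown in the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line in a CRI-O drop-in file,
// the same edit the sed command in the log performs.
func setPauseImage(confPath, image string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(confPath, updated, 0o644)
}

func main() {
	// Path and image are taken from the log above; adjust for other setups.
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}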
	I1018 13:17:34.905522  999649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:35.052347  999649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 13:17:35.227050  999649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:17:35.227175  999649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
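The two 60-second waits above (first for the CRI-O socket, then for crictl) are simple poll loops around a stat of /var/run/crio/crio.sock. A small sketch of that wait-for-socket pattern, assuming a fixed poll interval and not reproducing minikube's implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls a path until it exists as a Unix socket or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}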
	I1018 13:17:35.231142  999649 start.go:563] Will wait 60s for crictl version
	I1018 13:17:35.231210  999649 ssh_runner.go:195] Run: which crictl
	I1018 13:17:35.234926  999649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:17:35.262917  999649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:17:35.263066  999649 ssh_runner.go:195] Run: crio --version
	I1018 13:17:35.293648  999649 ssh_runner.go:195] Run: crio --version
	I1018 13:17:35.329621  999649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:17:35.332535  999649 cli_runner.go:164] Run: docker network inspect pause-581407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:17:35.350422  999649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 13:17:35.354702  999649 kubeadm.go:883] updating cluster {Name:pause-581407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:17:35.354844  999649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:17:35.354890  999649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:17:35.393429  999649 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:17:35.393454  999649 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:17:35.393512  999649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:17:35.420435  999649 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:17:35.420463  999649 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:17:35.420473  999649 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 13:17:35.420582  999649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-581407 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:17:35.420674  999649 ssh_runner.go:195] Run: crio config
	I1018 13:17:35.479207  999649 cni.go:84] Creating CNI manager for ""
	I1018 13:17:35.479298  999649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:17:35.479341  999649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:17:35.479382  999649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-581407 NodeName:pause-581407 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:17:35.479547  999649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-581407"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:17:35.479632  999649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:17:35.488967  999649 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:17:35.489036  999649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:17:35.498172  999649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1018 13:17:35.512091  999649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:17:35.525306  999649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 13:17:35.538630  999649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:17:35.542538  999649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:35.680863  999649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:17:35.694659  999649 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407 for IP: 192.168.76.2
	I1018 13:17:35.694679  999649 certs.go:195] generating shared ca certs ...
	I1018 13:17:35.694694  999649 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:35.694919  999649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:17:35.694994  999649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:17:35.695009  999649 certs.go:257] generating profile certs ...
	I1018 13:17:35.695122  999649 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.key
	I1018 13:17:35.695216  999649 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/apiserver.key.4c14d249
	I1018 13:17:35.695290  999649 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/proxy-client.key
	I1018 13:17:35.695424  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:17:35.695477  999649 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:17:35.695494  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:17:35.695519  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:17:35.695584  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:17:35.695617  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:17:35.695743  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:17:35.696428  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:17:35.717414  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:17:35.735033  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:17:35.753881  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:17:35.771693  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 13:17:35.789182  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 13:17:35.814586  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:17:35.837866  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:17:35.860683  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:17:35.881879  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:17:35.902257  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:17:35.919536  999649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:17:35.933405  999649 ssh_runner.go:195] Run: openssl version
	I1018 13:17:35.939515  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:17:35.948131  999649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:35.952063  999649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:35.952199  999649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:35.993254  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:17:36.002298  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:17:36.014965  999649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:17:36.019261  999649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:17:36.019386  999649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:17:36.061387  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 13:17:36.069596  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:17:36.078373  999649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:17:36.082207  999649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:17:36.082312  999649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:17:36.123523  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
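Each CA above is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). The hash comes from the same openssl x509 -hash -noout call seen in the log; the sketch below shells out to openssl to compute it and prints the symlink name it implies. It assumes openssl is on PATH and is purely illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <pem>` and returns the
// short hash used for /etc/ssl/certs/<hash>.0 symlink names.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The matching trust symlink would be /etc/ssl/certs/<hash>.0
	fmt.Println(hash + ".0")
}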
	I1018 13:17:36.131488  999649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:17:36.135379  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:17:36.188563  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:17:36.238957  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:17:36.305353  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:17:36.375871  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:17:36.523382  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
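The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check can be expressed directly with Go's crypto/x509, as in this sketch (a stand-in for the openssl call, not minikube's code), using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, the equivalent of `openssl x509 -checkend`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}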
	I1018 13:17:32.651038  999679 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:17:32.651065  999679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:17:32.651145  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:32.656012  999679 kapi.go:59] client config for kubernetes-upgrade-022190: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 13:17:32.656328  999679 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-022190"
	W1018 13:17:32.656346  999679 addons.go:247] addon default-storageclass should already be in state true
	I1018 13:17:32.656371  999679 host.go:66] Checking if "kubernetes-upgrade-022190" exists ...
	I1018 13:17:32.656800  999679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022190 --format={{.State.Status}}
	I1018 13:17:32.701360  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:32.714761  999679 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:32.714783  999679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:17:32.714856  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:32.749553  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:32.837610  999679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:17:32.846795  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:17:32.853954  999679 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:17:32.854039  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:32.876593  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1018 13:17:32.945380  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:32.945419  999679 retry.go:31] will retry after 238.421339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 13:17:32.962116  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:32.962150  999679 retry.go:31] will retry after 308.496613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.184524  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:33.253292  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.253325  999679 retry.go:31] will retry after 195.153176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.271506  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1018 13:17:33.345549  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.345578  999679 retry.go:31] will retry after 476.50141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
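The kubectl apply failures above (connection refused on localhost:8443 while the apiserver is still coming up) are not fatal; each one is retried after a growing, jittered delay (238ms, 308ms, 476ms, ...). A minimal sketch of that retry-with-backoff pattern, not taken from minikube's retry.go and with illustrative attempt counts:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs kubectl apply with a jittered, doubling delay until
// it succeeds or the attempts are exhausted, mirroring the retry log lines.
func applyWithRetry(manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("apply of %s failed after %d attempts: %w", manifest, attempts, err)
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println(err)
	}
}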
	I1018 13:17:33.354853  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:33.449235  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:33.543331  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.543369  999679 retry.go:31] will retry after 559.285396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.822911  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:33.854077  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:33.921464  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.921493  999679 retry.go:31] will retry after 525.012815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.102856  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:34.190672  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.190707  999679 retry.go:31] will retry after 570.237713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.354985  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:34.446730  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1018 13:17:34.538740  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.538773  999679 retry.go:31] will retry after 1.253477765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.761164  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:34.848198  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.848228  999679 retry.go:31] will retry after 1.805738404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.854542  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:35.354261  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:35.792841  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:35.854212  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:35.892777  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:35.892804  999679 retry.go:31] will retry after 1.428518342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:36.354179  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:36.654969  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:17:36.619316  999649 kubeadm.go:400] StartCluster: {Name:pause-581407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:36.619435  999649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:17:36.619498  999649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:17:36.694930  999649 cri.go:89] found id: "3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441"
	I1018 13:17:36.695027  999649 cri.go:89] found id: "99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71"
	I1018 13:17:36.695049  999649 cri.go:89] found id: "db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715"
	I1018 13:17:36.695068  999649 cri.go:89] found id: "cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6"
	I1018 13:17:36.695098  999649 cri.go:89] found id: "1d867435092e2ae05fa215ae5358374d93d6b6d49cf7df765b0828501daa311a"
	I1018 13:17:36.695121  999649 cri.go:89] found id: "32797f415a8c26a1b4aa88afa3f9137729690bd7d45311a68519beaccac43d20"
	I1018 13:17:36.695140  999649 cri.go:89] found id: "1b1355c4f0d44581bb6ae756cce066454470ecb4bbe2947c437b4450819922e1"
	I1018 13:17:36.695200  999649 cri.go:89] found id: "70e591f358e983efcdf4f01017e333dfaa6bfb26b93122e90d41ce990b9ac96b"
	I1018 13:17:36.695229  999649 cri.go:89] found id: "964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361"
	I1018 13:17:36.695275  999649 cri.go:89] found id: "dadb5dd59eca975bd8d89eca080be31edffaa1af272cf6f32406ac8cd85fc5c8"
	I1018 13:17:36.695304  999649 cri.go:89] found id: "69a5d51f6f41fad47a432f28e5b9ebd476f9e34e0169affa15deeb3be20b5ef0"
	I1018 13:17:36.695332  999649 cri.go:89] found id: "a5a7eba2cfa84bb8e262a1c5817166f519b4e09375861ce4a544520381703cc1"
	I1018 13:17:36.695372  999649 cri.go:89] found id: "7adb65405ca120df8f04c836231d865be3c0d67b70d53b94513214e3425de043"
	I1018 13:17:36.695394  999649 cri.go:89] found id: "4d2ab802325be7559c201bd12b2e174c54b89efbf9a5e54f0f6d4ff1d99f5680"
	I1018 13:17:36.695424  999649 cri.go:89] found id: ""
	I1018 13:17:36.695536  999649 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:17:36.730995  999649 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:17:36Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:17:36.731158  999649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:17:36.744451  999649 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 13:17:36.744529  999649 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:17:36.744646  999649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:17:36.755553  999649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:17:36.756405  999649 kubeconfig.go:125] found "pause-581407" server: "https://192.168.76.2:8443"
	I1018 13:17:36.757741  999649 kapi.go:59] client config for pause-581407: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 13:17:36.758570  999649 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 13:17:36.758699  999649 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 13:17:36.758735  999649 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 13:17:36.758754  999649 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 13:17:36.758790  999649 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 13:17:36.759264  999649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:17:36.770100  999649 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
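The "does not require reconfiguration" decision above follows a `diff -u` of the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new that was copied over at 13:17:35.525; identical files mean the control plane can be left as-is. A simplified stand-in for that comparison, assuming the two paths from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfigure reports whether the freshly rendered kubeadm config
// differs from the one the running cluster was started with.
func needsReconfigure(current, rendered string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, err
	}
	b, err := os.ReadFile(rendered)
	if err != nil {
		return true, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("reconfiguration required:", changed)
}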
	I1018 13:17:36.770204  999649 kubeadm.go:601] duration metric: took 25.65236ms to restartPrimaryControlPlane
	I1018 13:17:36.770228  999649 kubeadm.go:402] duration metric: took 150.921439ms to StartCluster
	I1018 13:17:36.770279  999649 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:36.770409  999649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:17:36.771578  999649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:36.771997  999649 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:17:36.772644  999649 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:17:36.772747  999649 config.go:182] Loaded profile config "pause-581407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:36.775857  999649 out.go:179] * Verifying Kubernetes components...
	I1018 13:17:36.779006  999649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:36.779195  999649 out.go:179] * Enabled addons: 
	I1018 13:17:36.782146  999649 addons.go:514] duration metric: took 9.495028ms for enable addons: enabled=[]
	I1018 13:17:37.035639  999649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:17:37.059765  999649 node_ready.go:35] waiting up to 6m0s for node "pause-581407" to be "Ready" ...
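The "waiting up to 6m0s for node ... to be Ready" step polls the Node object's Ready condition through the Kubernetes API. A client-go sketch of that single check is below; it assumes client-go is available as a dependency and uses the kubeconfig path and node name from this run purely as examples, and it is not the test harness's own polling code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady fetches the Node object and inspects its Ready condition,
// which is what the wait loop in the log keeps re-checking.
func nodeIsReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeIsReady("/home/jenkins/minikube-integration/21647-834184/kubeconfig", "pause-581407")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Ready:", ready)
}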
	W1018 13:17:36.807012  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:36.807045  999679 retry.go:31] will retry after 1.780691464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:36.854309  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:37.322268  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:37.354746  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:37.447700  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:37.447731  999679 retry.go:31] will retry after 2.321151428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:37.854148  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:38.354432  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:38.588479  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:38.716322  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:38.716361  999679 retry.go:31] will retry after 1.594670935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:38.854715  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:39.354601  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:39.769638  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:39.855128  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:39.895983  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:39.896071  999679 retry.go:31] will retry after 2.664209537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:40.311225  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:17:40.354820  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:40.415315  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:40.415411  999679 retry.go:31] will retry after 5.056501708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:40.854682  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:41.354573  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:41.654846  999649 node_ready.go:49] node "pause-581407" is "Ready"
	I1018 13:17:41.654878  999649 node_ready.go:38] duration metric: took 4.595069392s for node "pause-581407" to be "Ready" ...
	I1018 13:17:41.654891  999649 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:17:41.655006  999649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:41.672353  999649 api_server.go:72] duration metric: took 4.900151249s to wait for apiserver process to appear ...
	I1018 13:17:41.672379  999649 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:17:41.672399  999649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:17:41.686877  999649 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 13:17:41.686910  999649 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 13:17:42.173530  999649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:17:42.187978  999649 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:17:42.188043  999649 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
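While the rbac and priority-class poststarthooks still report [-], /healthz keeps returning 500, and the earlier anonymous request got 403; minikube simply keeps polling until it sees 200, as the next check below shows. A rough equivalent of that polling loop is sketched here, with TLS verification skipped purely to keep the example short (minikube itself uses the cluster CA and client certificates).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
	// 403 (anonymous request rejected) and 500 (poststarthooks still failing) are
	// both treated as "not ready yet", matching the responses in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}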
	I1018 13:17:42.672523  999649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:17:42.686764  999649 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:17:42.686805  999649 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:17:43.173518  999649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:17:43.182257  999649 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 13:17:43.183516  999649 api_server.go:141] control plane version: v1.34.1
	I1018 13:17:43.183543  999649 api_server.go:131] duration metric: took 1.511156868s to wait for apiserver health ...
	I1018 13:17:43.183554  999649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:17:43.187136  999649 system_pods.go:59] 7 kube-system pods found
	I1018 13:17:43.187174  999649 system_pods.go:61] "coredns-66bc5c9577-tzdm5" [9d2e1e6f-52ac-477c-b94f-c5b39e401dde] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:17:43.187183  999649 system_pods.go:61] "etcd-pause-581407" [e98f9f03-1631-4c2e-ba26-43fd3f26abfd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:17:43.187190  999649 system_pods.go:61] "kindnet-8jjd5" [ce4339d5-6ec9-44ba-891a-207552e6e2d8] Running
	I1018 13:17:43.187198  999649 system_pods.go:61] "kube-apiserver-pause-581407" [757bd729-dacd-4d7f-a80b-b8853434f4f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:17:43.187216  999649 system_pods.go:61] "kube-controller-manager-pause-581407" [bffbe8fa-169d-4e7a-9c06-a73513ea2c20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:17:43.187228  999649 system_pods.go:61] "kube-proxy-4l8qb" [d0d30679-b467-4886-9e01-214192aa7e54] Running
	I1018 13:17:43.187235  999649 system_pods.go:61] "kube-scheduler-pause-581407" [5697fd37-f14d-4489-8d2d-345ae7bbd321] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:17:43.187241  999649 system_pods.go:74] duration metric: took 3.671123ms to wait for pod list to return data ...
	I1018 13:17:43.187252  999649 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:17:43.189989  999649 default_sa.go:45] found service account: "default"
	I1018 13:17:43.190071  999649 default_sa.go:55] duration metric: took 2.811225ms for default service account to be created ...
	I1018 13:17:43.190096  999649 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:17:43.196445  999649 system_pods.go:86] 7 kube-system pods found
	I1018 13:17:43.196482  999649 system_pods.go:89] "coredns-66bc5c9577-tzdm5" [9d2e1e6f-52ac-477c-b94f-c5b39e401dde] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:17:43.196492  999649 system_pods.go:89] "etcd-pause-581407" [e98f9f03-1631-4c2e-ba26-43fd3f26abfd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:17:43.196498  999649 system_pods.go:89] "kindnet-8jjd5" [ce4339d5-6ec9-44ba-891a-207552e6e2d8] Running
	I1018 13:17:43.196505  999649 system_pods.go:89] "kube-apiserver-pause-581407" [757bd729-dacd-4d7f-a80b-b8853434f4f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:17:43.196512  999649 system_pods.go:89] "kube-controller-manager-pause-581407" [bffbe8fa-169d-4e7a-9c06-a73513ea2c20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:17:43.196517  999649 system_pods.go:89] "kube-proxy-4l8qb" [d0d30679-b467-4886-9e01-214192aa7e54] Running
	I1018 13:17:43.196523  999649 system_pods.go:89] "kube-scheduler-pause-581407" [5697fd37-f14d-4489-8d2d-345ae7bbd321] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:17:43.196534  999649 system_pods.go:126] duration metric: took 6.431976ms to wait for k8s-apps to be running ...
	I1018 13:17:43.196547  999649 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:17:43.196608  999649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:17:43.209945  999649 system_svc.go:56] duration metric: took 13.375146ms WaitForService to wait for kubelet
	I1018 13:17:43.209977  999649 kubeadm.go:586] duration metric: took 6.437778963s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:17:43.209998  999649 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:17:43.213091  999649 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:17:43.213127  999649 node_conditions.go:123] node cpu capacity is 2
	I1018 13:17:43.213141  999649 node_conditions.go:105] duration metric: took 3.136652ms to run NodePressure ...
	I1018 13:17:43.213153  999649 start.go:241] waiting for startup goroutines ...
	I1018 13:17:43.213161  999649 start.go:246] waiting for cluster config update ...
	I1018 13:17:43.213169  999649 start.go:255] writing updated cluster config ...
	I1018 13:17:43.213470  999649 ssh_runner.go:195] Run: rm -f paused
	I1018 13:17:43.217003  999649 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:17:43.217691  999649 kapi.go:59] client config for pause-581407: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 13:17:43.220847  999649 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tzdm5" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 13:17:45.231221  999649 pod_ready.go:104] pod "coredns-66bc5c9577-tzdm5" is not "Ready", error: <nil>
	I1018 13:17:41.855064  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:42.354827  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:42.561223  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1018 13:17:42.648281  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:42.648318  999679 retry.go:31] will retry after 6.153971021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:42.854852  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:43.355078  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:43.854921  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:44.355070  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:44.854172  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:45.354683  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:45.473365  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:45.568339  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:45.568372  999679 retry.go:31] will retry after 5.478216126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:45.854889  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:46.354216  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:47.726752  999649 pod_ready.go:104] pod "coredns-66bc5c9577-tzdm5" is not "Ready", error: <nil>
	I1018 13:17:48.726927  999649 pod_ready.go:94] pod "coredns-66bc5c9577-tzdm5" is "Ready"
	I1018 13:17:48.726957  999649 pod_ready.go:86] duration metric: took 5.506085138s for pod "coredns-66bc5c9577-tzdm5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.729816  999649 pod_ready.go:83] waiting for pod "etcd-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.734380  999649 pod_ready.go:94] pod "etcd-pause-581407" is "Ready"
	I1018 13:17:48.734411  999649 pod_ready.go:86] duration metric: took 4.565637ms for pod "etcd-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.736845  999649 pod_ready.go:83] waiting for pod "kube-apiserver-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.741768  999649 pod_ready.go:94] pod "kube-apiserver-pause-581407" is "Ready"
	I1018 13:17:48.741794  999649 pod_ready.go:86] duration metric: took 4.920242ms for pod "kube-apiserver-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.744415  999649 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.925251  999649 pod_ready.go:94] pod "kube-controller-manager-pause-581407" is "Ready"
	I1018 13:17:48.925281  999649 pod_ready.go:86] duration metric: took 180.837856ms for pod "kube-controller-manager-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:49.125619  999649 pod_ready.go:83] waiting for pod "kube-proxy-4l8qb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:49.525394  999649 pod_ready.go:94] pod "kube-proxy-4l8qb" is "Ready"
	I1018 13:17:49.525493  999649 pod_ready.go:86] duration metric: took 399.83359ms for pod "kube-proxy-4l8qb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:49.725895  999649 pod_ready.go:83] waiting for pod "kube-scheduler-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:50.126818  999649 pod_ready.go:94] pod "kube-scheduler-pause-581407" is "Ready"
	I1018 13:17:50.126897  999649 pod_ready.go:86] duration metric: took 400.925301ms for pod "kube-scheduler-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:50.126926  999649 pod_ready.go:40] duration metric: took 6.909888289s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:17:50.201402  999649 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:17:50.204763  999649 out.go:179] * Done! kubectl is now configured to use "pause-581407" cluster and "default" namespace by default
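The pod_ready.go waits above amount to checking each pod's Ready condition through the API until it is True (or the pod is gone). A hedged sketch of that check with client-go follows; the kubeconfig path is a placeholder and the pod name is just the coredns pod from this run.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True, which is the
	// condition the pod_ready.go log lines above are waiting for (sketch only).
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; point this at the profile's kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-tzdm5", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}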
	I1018 13:17:46.854995  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:47.354216  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:47.854226  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:48.354761  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:48.802531  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:48.855035  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:48.875219  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:48.875249  999679 retry.go:31] will retry after 5.966238101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:49.354887  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:49.854699  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:50.354983  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:50.854686  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:51.047531  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:51.129212  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:51.129251  999679 retry.go:31] will retry after 10.969316836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:51.354705  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:51.388689  999679 api_server.go:72] duration metric: took 18.776516216s to wait for apiserver process to appear ...
	I1018 13:17:51.388718  999679 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:17:51.388747  999679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:17:51.389039  999679 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> CRI-O <==
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.533104503Z" level=info msg="Created container db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715: kube-system/kube-apiserver-pause-581407/kube-apiserver" id=c9241dfe-f070-4251-8cb5-a09c9f8960a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.53451309Z" level=info msg="Starting container: db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715" id=79548e41-645b-4cdf-9ba7-198d55287f08 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.537353009Z" level=info msg="Started container" PID=2324 containerID=db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715 description=kube-system/kube-apiserver-pause-581407/kube-apiserver id=79548e41-645b-4cdf-9ba7-198d55287f08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=393fcdeafc5fbe5639a7d6449f86ca498be47ff4c823113792874b7633d1fe4d
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.550203973Z" level=info msg="Started container" PID=2315 containerID=cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6 description=kube-system/coredns-66bc5c9577-tzdm5/coredns id=3fa00fb4-2e62-45d7-a067-bf863d439b2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=821c92b817eaaba160d52d9d50fff8a4d8f800204209f435e68d9493b1f6e807
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.57009679Z" level=info msg="Created container 99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71: kube-system/kube-scheduler-pause-581407/kube-scheduler" id=5bc71247-93ce-4069-bcdb-a7592160aedf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.570661751Z" level=info msg="Starting container: 99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71" id=2edec95d-156d-49ee-9814-08330ae84435 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.576162815Z" level=info msg="Created container 3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441: kube-system/kube-proxy-4l8qb/kube-proxy" id=ee970cab-b4fb-4eef-bc12-e2d40569d3a4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.578844767Z" level=info msg="Started container" PID=2336 containerID=99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71 description=kube-system/kube-scheduler-pause-581407/kube-scheduler id=2edec95d-156d-49ee-9814-08330ae84435 name=/runtime.v1.RuntimeService/StartContainer sandboxID=218f64315d95613914a0ac1670fb9e5a28a88a1fcedc30bfd9cef6a9a9373b8b
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.57973335Z" level=info msg="Starting container: 3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441" id=65bd4147-401d-4a7d-b071-c36c0bd722c0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.596077498Z" level=info msg="Started container" PID=2332 containerID=3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441 description=kube-system/kube-proxy-4l8qb/kube-proxy id=65bd4147-401d-4a7d-b071-c36c0bd722c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0d94cf0a059bf19a5ddbf6bf6a3021a3c22bd1fb7fcf2f816611f76d75ce97e
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.658825681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.662423613Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.662459527Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.662482592Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.665995699Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.666031769Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.666056844Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.669435057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.669475066Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.669498516Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.672750878Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.672797172Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.672820212Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.676752326Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.676923839Z" level=info msg="Updated default CNI network name to kindnet"
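The CRI-O lines above are its CNI monitor reacting to file events on /etc/cni/net.d as kindnet writes 10-kindnet.conflist through a temp file and rename. A minimal sketch of the same kind of directory watch is shown below, using the fsnotify package (an assumption for illustration; CRI-O's own watcher lives inside its codebase).

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		watcher, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer watcher.Close()

		// Watch the CNI configuration directory, like the "CNI monitoring event" lines above.
		if err := watcher.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev, ok := <-watcher.Events:
				if !ok {
					return
				}
				// CREATE, WRITE and RENAME events appear in the log as the conflist is written atomically.
				if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
					log.Printf("CNI config change: %s %s", ev.Op, ev.Name)
				}
			case werr, ok := <-watcher.Errors:
				if !ok {
					return
				}
				log.Println("watch error:", werr)
			}
		}
	}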
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	3c02f603c1a8e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   17 seconds ago       Running             kube-proxy                1                   c0d94cf0a059b       kube-proxy-4l8qb                       kube-system
	99118ec2adf04       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   17 seconds ago       Running             kube-scheduler            1                   218f64315d956       kube-scheduler-pause-581407            kube-system
	db8db3b35d1a0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   17 seconds ago       Running             kube-apiserver            1                   393fcdeafc5fb       kube-apiserver-pause-581407            kube-system
	cf443ebecbf84       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   17 seconds ago       Running             coredns                   1                   821c92b817eaa       coredns-66bc5c9577-tzdm5               kube-system
	1d867435092e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   17 seconds ago       Running             etcd                      1                   bb38fa8e9f53b       etcd-pause-581407                      kube-system
	32797f415a8c2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   17 seconds ago       Running             kube-controller-manager   1                   94243e5870ff6       kube-controller-manager-pause-581407   kube-system
	1b1355c4f0d44       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   17 seconds ago       Running             kindnet-cni               1                   9d7ebc83e257f       kindnet-8jjd5                          kube-system
	70e591f358e98       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   30 seconds ago       Exited              coredns                   0                   821c92b817eaa       coredns-66bc5c9577-tzdm5               kube-system
	964b1c1291135       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   c0d94cf0a059b       kube-proxy-4l8qb                       kube-system
	dadb5dd59eca9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   9d7ebc83e257f       kindnet-8jjd5                          kube-system
	69a5d51f6f41f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   218f64315d956       kube-scheduler-pause-581407            kube-system
	a5a7eba2cfa84       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   bb38fa8e9f53b       etcd-pause-581407                      kube-system
	7adb65405ca12       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   393fcdeafc5fb       kube-apiserver-pause-581407            kube-system
	4d2ab802325be       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   94243e5870ff6       kube-controller-manager-pause-581407   kube-system
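A container table like the one above can be reproduced on the node with crictl against the CRI-O socket. The sketch below just shells out to crictl and prints its output; it assumes crictl is installed and its default runtime endpoint is configured.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// List all containers (running and exited), as in the table above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			log.Fatalf("crictl ps failed: %v\n%s", err, out)
		}
		fmt.Print(string(out))
	}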
	
	
	==> coredns [70e591f358e983efcdf4f01017e333dfaa6bfb26b93122e90d41ce990b9ac96b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33444 - 12329 "HINFO IN 4171231459783145117.769488335896153306. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.032623501s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45046 - 37112 "HINFO IN 5890768828181610452.1586750936055456133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019392842s
	
	
	==> describe nodes <==
	Name:               pause-581407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-581407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=pause-581407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_16_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:16:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-581407
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:17:37 +0000   Sat, 18 Oct 2025 13:16:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:17:37 +0000   Sat, 18 Oct 2025 13:16:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:17:37 +0000   Sat, 18 Oct 2025 13:16:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:17:37 +0000   Sat, 18 Oct 2025 13:17:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-581407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                d9f2dc28-2898-4eca-a2e0-ac219f0a2925
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-tzdm5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     72s
	  kube-system                 etcd-pause-581407                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kindnet-8jjd5                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-pause-581407             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-pause-581407    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-4l8qb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-581407             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 71s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Warning  CgroupV1                 86s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node pause-581407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node pause-581407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     86s (x8 over 86s)  kubelet          Node pause-581407 status is now: NodeHasSufficientPID
	  Normal   Starting                 78s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 78s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  77s                kubelet          Node pause-581407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    77s                kubelet          Node pause-581407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     77s                kubelet          Node pause-581407 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           73s                node-controller  Node pause-581407 event: Registered Node pause-581407 in Controller
	  Normal   NodeReady                32s                kubelet          Node pause-581407 status is now: NodeReady
	  Normal   RegisteredNode           9s                 node-controller  Node pause-581407 event: Registered Node pause-581407 in Controller
	
	
	==> dmesg <==
	[ +36.492252] overlayfs: idmapped layers are currently not supported
	[Oct18 12:43] overlayfs: idmapped layers are currently not supported
	[Oct18 12:44] overlayfs: idmapped layers are currently not supported
	[  +3.556272] overlayfs: idmapped layers are currently not supported
	[Oct18 12:47] overlayfs: idmapped layers are currently not supported
	[Oct18 12:51] overlayfs: idmapped layers are currently not supported
	[Oct18 12:53] overlayfs: idmapped layers are currently not supported
	[Oct18 12:57] overlayfs: idmapped layers are currently not supported
	[Oct18 12:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:59] overlayfs: idmapped layers are currently not supported
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1d867435092e2ae05fa215ae5358374d93d6b6d49cf7df765b0828501daa311a] <==
	{"level":"warn","ts":"2025-10-18T13:17:39.509037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.530705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.549232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.566184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.585982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.608408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.627598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.650280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.671945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.719008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.721708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.732270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.750078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.778734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.824437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.880849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.912524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.949685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.011770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.043885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.141513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.162376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.200287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.239140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.339930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33550","server-name":"","error":"EOF"}
	
	
	==> etcd [a5a7eba2cfa84bb8e262a1c5817166f519b4e09375861ce4a544520381703cc1] <==
	{"level":"warn","ts":"2025-10-18T13:16:32.917418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:32.954014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.034422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.061702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.093053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.130308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.180930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35058","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T13:17:28.296217Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T13:17:28.296270Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-581407","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-18T13:17:28.296368Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T13:17:28.456599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T13:17:28.458171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T13:17:28.458294Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-18T13:17:28.458422Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T13:17:28.458464Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T13:17:28.458731Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T13:17:28.458797Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T13:17:28.458830Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T13:17:28.458899Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T13:17:28.458933Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T13:17:28.458968Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T13:17:28.463008Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-18T13:17:28.463183Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T13:17:28.463247Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T13:17:28.463290Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-581407","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 13:17:54 up  5:00,  0 user,  load average: 2.96, 2.44, 1.98
	Linux pause-581407 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b1355c4f0d44581bb6ae756cce066454470ecb4bbe2947c437b4450819922e1] <==
	I1018 13:17:36.479869       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:17:36.480252       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:17:36.480381       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:17:36.480393       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:17:36.480406       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:17:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:17:36.716495       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:17:36.716837       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:17:36.716886       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:17:36.717048       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 13:17:41.819341       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:17:41.819477       1 metrics.go:72] Registering metrics
	I1018 13:17:41.819565       1 controller.go:711] "Syncing nftables rules"
	I1018 13:17:46.658470       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:17:46.658524       1 main.go:301] handling current node
	
	
	==> kindnet [dadb5dd59eca975bd8d89eca080be31edffaa1af272cf6f32406ac8cd85fc5c8] <==
	I1018 13:16:42.415427       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:16:42.416819       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:16:42.416997       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:16:42.417041       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:16:42.417089       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:16:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:16:42.633692       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:16:42.633776       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:16:42.633812       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:16:42.634652       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:17:12.634440       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:17:12.634446       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:17:12.634585       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:17:12.634586       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:17:13.934052       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:17:13.934158       1 metrics.go:72] Registering metrics
	I1018 13:17:13.934254       1 controller.go:711] "Syncing nftables rules"
	I1018 13:17:22.639828       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:17:22.639861       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7adb65405ca120df8f04c836231d865be3c0d67b70d53b94513214e3425de043] <==
	I1018 13:16:34.183721       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:16:34.183741       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 13:16:34.183836       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:16:34.203497       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 13:16:34.203606       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:16:34.236513       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:16:34.236643       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:16:34.893223       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 13:16:34.898728       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 13:16:34.898765       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:16:35.665585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:16:35.717400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:16:35.796951       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 13:16:35.806531       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 13:16:35.808169       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:16:35.813351       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:16:36.064057       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:16:36.851950       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:16:36.875962       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 13:16:36.890352       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 13:16:41.807320       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 13:16:41.928982       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:16:41.943095       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:16:42.108776       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:17:28.279569       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715] <==
	I1018 13:17:41.692923       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 13:17:41.697746       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 13:17:41.698489       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 13:17:41.698641       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 13:17:41.698721       1 aggregator.go:171] initial CRD sync complete...
	I1018 13:17:41.698772       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:17:41.698800       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:17:41.698827       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:17:41.699203       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 13:17:41.699258       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:17:41.715897       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 13:17:41.715996       1 policy_source.go:240] refreshing policies
	I1018 13:17:41.716237       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 13:17:41.733671       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:17:41.734440       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 13:17:41.767521       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:17:41.775730       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 13:17:41.776141       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:17:41.776706       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 13:17:42.380057       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:17:43.742008       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:17:45.136634       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:17:45.201623       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:17:45.413490       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:17:45.497407       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [32797f415a8c26a1b4aa88afa3f9137729690bd7d45311a68519beaccac43d20] <==
	I1018 13:17:45.127769       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 13:17:45.129973       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 13:17:45.132972       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 13:17:45.156879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:17:45.164880       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:17:45.171931       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:17:45.172079       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:17:45.172178       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:17:45.174901       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:17:45.175003       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:17:45.172200       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:17:45.172212       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 13:17:45.172226       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:17:45.172236       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:17:45.179643       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 13:17:45.180909       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 13:17:45.180987       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 13:17:45.181030       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 13:17:45.181069       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:17:45.179686       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 13:17:45.187909       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 13:17:45.188461       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 13:17:45.188533       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 13:17:45.196258       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 13:17:45.200846       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-controller-manager [4d2ab802325be7559c201bd12b2e174c54b89efbf9a5e54f0f6d4ff1d99f5680] <==
	I1018 13:16:41.092212       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:16:41.098019       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 13:16:41.101372       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 13:16:41.101433       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:16:41.101580       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 13:16:41.101608       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 13:16:41.102119       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:16:41.102267       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 13:16:41.103240       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 13:16:41.103526       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-581407" podCIDRs=["10.244.0.0/24"]
	I1018 13:16:41.103581       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 13:16:41.104131       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 13:16:41.104165       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:16:41.104273       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 13:16:41.104517       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:16:41.104988       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 13:16:41.105019       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:16:41.106206       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 13:16:41.119518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:16:41.123688       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 13:16:41.124762       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 13:16:41.130032       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:16:41.142180       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 13:16:41.148971       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:17:26.064007       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441] <==
	I1018 13:17:38.390852       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:17:39.507893       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:17:41.770425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:17:41.770467       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:17:41.770528       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:17:41.814095       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:17:41.814216       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:17:41.822926       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:17:41.823358       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:17:41.823448       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:17:41.826122       1 config.go:200] "Starting service config controller"
	I1018 13:17:41.826208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:17:41.826255       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:17:41.826304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:17:41.826347       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:17:41.826403       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:17:41.827489       1 config.go:309] "Starting node config controller"
	I1018 13:17:41.827565       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:17:41.827596       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:17:41.926955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:17:41.927007       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:17:41.927029       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361] <==
	I1018 13:16:42.363628       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:16:42.461752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:16:42.568335       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:16:42.568371       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:16:42.568436       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:16:42.747541       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:16:42.747590       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:16:42.824357       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:16:42.824709       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:16:42.824728       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:16:42.826236       1 config.go:200] "Starting service config controller"
	I1018 13:16:42.826246       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:16:42.826267       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:16:42.826271       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:16:42.826284       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:16:42.826288       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:16:42.826918       1 config.go:309] "Starting node config controller"
	I1018 13:16:42.826925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:16:42.826931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:16:42.927395       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:16:42.927429       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:16:42.927477       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [69a5d51f6f41fad47a432f28e5b9ebd476f9e34e0169affa15deeb3be20b5ef0] <==
	E1018 13:16:34.151004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:16:34.151055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 13:16:34.151166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 13:16:34.151225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:16:34.151265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:16:34.151641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 13:16:34.155971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 13:16:34.982506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 13:16:34.985938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 13:16:34.987093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 13:16:34.988782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:16:34.996007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 13:16:35.009481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 13:16:35.082958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 13:16:35.144464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 13:16:35.223125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 13:16:35.303674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:16:35.359904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1018 13:16:36.926980       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:17:28.281154       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 13:17:28.281247       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 13:17:28.281263       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 13:17:28.281276       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:17:28.281366       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 13:17:28.281382       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71] <==
	I1018 13:17:39.762186       1 serving.go:386] Generated self-signed cert in-memory
	W1018 13:17:41.600187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 13:17:41.600225       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 13:17:41.600236       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 13:17:41.600243       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 13:17:41.693486       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:17:41.693587       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:17:41.700189       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:17:41.700371       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:17:41.700428       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:17:41.700469       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 13:17:41.800985       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.279777    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6b3f1fb1812d4a14a76248251f4c7e63" pod="kube-system/kube-controller-manager-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.280036    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="463d758dafa1340c8fbb795faadc1d16" pod="kube-system/kube-apiserver-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: I1018 13:17:36.327127    1294 scope.go:117] "RemoveContainer" containerID="964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.327924    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-tzdm5\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9d2e1e6f-52ac-477c-b94f-c5b39e401dde" pod="kube-system/coredns-66bc5c9577-tzdm5"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.328289    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5240846d13aca7e9185d8b56b2c8d0c0" pod="kube-system/etcd-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.328616    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6b3f1fb1812d4a14a76248251f4c7e63" pod="kube-system/kube-controller-manager-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.328916    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="463d758dafa1340c8fbb795faadc1d16" pod="kube-system/kube-apiserver-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.329255    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="8e6dfbfb3deed9f4c9553aa21d451053" pod="kube-system/kube-scheduler-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.329565    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-8jjd5\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ce4339d5-6ec9-44ba-891a-207552e6e2d8" pod="kube-system/kindnet-8jjd5"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.330100    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l8qb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d0d30679-b467-4886-9e01-214192aa7e54" pod="kube-system/kube-proxy-4l8qb"
	Oct 18 13:17:37 pause-581407 kubelet[1294]: W1018 13:17:37.199500    1294 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.540824    1294 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-581407\" is forbidden: User \"system:node:pause-581407\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" podUID="6b3f1fb1812d4a14a76248251f4c7e63" pod="kube-system/kube-controller-manager-pause-581407"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.541608    1294 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-581407\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.541746    1294 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-581407\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.541840    1294 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-581407\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.653628    1294 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-581407\" is forbidden: User \"system:node:pause-581407\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" podUID="463d758dafa1340c8fbb795faadc1d16" pod="kube-system/kube-apiserver-pause-581407"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.679309    1294 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-581407\" is forbidden: User \"system:node:pause-581407\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" podUID="8e6dfbfb3deed9f4c9553aa21d451053" pod="kube-system/kube-scheduler-pause-581407"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.696114    1294 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 18 13:17:41 pause-581407 kubelet[1294]:         pods "kindnet-8jjd5" is forbidden: User "system:node:pause-581407" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-581407' and this object
	Oct 18 13:17:41 pause-581407 kubelet[1294]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Oct 18 13:17:41 pause-581407 kubelet[1294]:  > podUID="ce4339d5-6ec9-44ba-891a-207552e6e2d8" pod="kube-system/kindnet-8jjd5"
	Oct 18 13:17:47 pause-581407 kubelet[1294]: W1018 13:17:47.214410    1294 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 13:17:50 pause-581407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:17:50 pause-581407 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:17:50 pause-581407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-581407 -n pause-581407
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-581407 -n pause-581407: exit status 2 (576.932443ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-581407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-581407
helpers_test.go:243: (dbg) docker inspect pause-581407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96",
	        "Created": "2025-10-18T13:16:13.554464597Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 995711,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:16:13.620193479Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96/hostname",
	        "HostsPath": "/var/lib/docker/containers/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96/hosts",
	        "LogPath": "/var/lib/docker/containers/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96/0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96-json.log",
	        "Name": "/pause-581407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-581407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-581407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0287d46aaf289a5f014e5baa5cc4c5e61bc76fe4b0316fff9df9b69bc55f5f96",
	                "LowerDir": "/var/lib/docker/overlay2/8ffcb757be154260dde70ec598bbc2538b02f4fb36f794898b58c137064d232b-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8ffcb757be154260dde70ec598bbc2538b02f4fb36f794898b58c137064d232b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8ffcb757be154260dde70ec598bbc2538b02f4fb36f794898b58c137064d232b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8ffcb757be154260dde70ec598bbc2538b02f4fb36f794898b58c137064d232b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-581407",
	                "Source": "/var/lib/docker/volumes/pause-581407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-581407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-581407",
	                "name.minikube.sigs.k8s.io": "pause-581407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a9033d4eca67c42d4bdab5224d490c08a5e00bde86ddce199d05e81e44ec6b3",
	            "SandboxKey": "/var/run/docker/netns/2a9033d4eca6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34136"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34134"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34135"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-581407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:5e:07:89:80:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "442df33bc12daff735b4003b303b115650e1303690b53f79fadf60e934b85454",
	                    "EndpointID": "2cf927cb00d2fe121445cb2da95fdb8161fe67cba1cfccc07e5169ef07cff410",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-581407",
	                        "0287d46aaf28"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-581407 -n pause-581407
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-581407 -n pause-581407: exit status 2 (472.63423ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-581407 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-581407 logs -n 25: (1.679159988s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-166782 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:11 UTC │ 18 Oct 25 13:13 UTC │
	│ start   │ -p missing-upgrade-972770 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-972770    │ jenkins │ v1.37.0 │ 18 Oct 25 13:11 UTC │ 18 Oct 25 13:12 UTC │
	│ delete  │ -p missing-upgrade-972770                                                                                                                │ missing-upgrade-972770    │ jenkins │ v1.37.0 │ 18 Oct 25 13:12 UTC │ 18 Oct 25 13:12 UTC │
	│ start   │ -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:12 UTC │ 18 Oct 25 13:12 UTC │
	│ stop    │ -p kubernetes-upgrade-022190                                                                                                             │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:12 UTC │ 18 Oct 25 13:12 UTC │
	│ start   │ -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:12 UTC │ 18 Oct 25 13:17 UTC │
	│ delete  │ -p NoKubernetes-166782                                                                                                                   │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │ 18 Oct 25 13:13 UTC │
	│ start   │ -p NoKubernetes-166782 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │ 18 Oct 25 13:13 UTC │
	│ ssh     │ -p NoKubernetes-166782 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │                     │
	│ stop    │ -p NoKubernetes-166782                                                                                                                   │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │ 18 Oct 25 13:13 UTC │
	│ start   │ -p NoKubernetes-166782 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:13 UTC │ 18 Oct 25 13:14 UTC │
	│ ssh     │ -p NoKubernetes-166782 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:14 UTC │                     │
	│ delete  │ -p NoKubernetes-166782                                                                                                                   │ NoKubernetes-166782       │ jenkins │ v1.37.0 │ 18 Oct 25 13:14 UTC │ 18 Oct 25 13:14 UTC │
	│ start   │ -p stopped-upgrade-311504 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-311504    │ jenkins │ v1.32.0 │ 18 Oct 25 13:14 UTC │ 18 Oct 25 13:14 UTC │
	│ stop    │ stopped-upgrade-311504 stop                                                                                                              │ stopped-upgrade-311504    │ jenkins │ v1.32.0 │ 18 Oct 25 13:14 UTC │ 18 Oct 25 13:14 UTC │
	│ start   │ -p stopped-upgrade-311504 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-311504    │ jenkins │ v1.37.0 │ 18 Oct 25 13:14 UTC │ 18 Oct 25 13:15 UTC │
	│ delete  │ -p stopped-upgrade-311504                                                                                                                │ stopped-upgrade-311504    │ jenkins │ v1.37.0 │ 18 Oct 25 13:15 UTC │ 18 Oct 25 13:15 UTC │
	│ start   │ -p running-upgrade-273873 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-273873    │ jenkins │ v1.32.0 │ 18 Oct 25 13:15 UTC │ 18 Oct 25 13:15 UTC │
	│ start   │ -p running-upgrade-273873 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-273873    │ jenkins │ v1.37.0 │ 18 Oct 25 13:15 UTC │ 18 Oct 25 13:16 UTC │
	│ delete  │ -p running-upgrade-273873                                                                                                                │ running-upgrade-273873    │ jenkins │ v1.37.0 │ 18 Oct 25 13:16 UTC │ 18 Oct 25 13:16 UTC │
	│ start   │ -p pause-581407 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-581407              │ jenkins │ v1.37.0 │ 18 Oct 25 13:16 UTC │ 18 Oct 25 13:17 UTC │
	│ start   │ -p pause-581407 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-581407              │ jenkins │ v1.37.0 │ 18 Oct 25 13:17 UTC │ 18 Oct 25 13:17 UTC │
	│ start   │ -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:17 UTC │                     │
	│ start   │ -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-022190 │ jenkins │ v1.37.0 │ 18 Oct 25 13:17 UTC │                     │
	│ pause   │ -p pause-581407 --alsologtostderr -v=5                                                                                                   │ pause-581407              │ jenkins │ v1.37.0 │ 18 Oct 25 13:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
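The last audit entry (pause -p pause-581407 --alsologtostderr -v=5) has no END TIME: it is the command whose failure produced this post-mortem. A minimal replay of that sequence, using only the commands recorded in the table (a sketch, not part of the captured log; run with the same binary path shown in the table):

    # sketch: replay the audited pause-581407 sequence
    out/minikube-linux-arm64 start -p pause-581407 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p pause-581407 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 pause -p pause-581407 --alsologtostderr -v=5   # the step that did not complete here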
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:17:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
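Two minikube start processes write to this section concurrently. Per the format line above, the third field of each entry is the thread id: lines tagged 999649 belong to the pause-581407 restart, and lines tagged 999679 to the parallel kubernetes-upgrade-022190 start (both identifiable from their profile config lines below). To follow one flow at a time (a sketch; the saved-log filename is hypothetical):

    # sketch: split the interleaved log by thread id
    grep ' 999649 ' pause-581407.log    # pause-581407 provisioning
    grep ' 999679 ' pause-581407.log    # kubernetes-upgrade-022190 provisioning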
	I1018 13:17:26.712562  999649 out.go:179] * Using the docker driver based on existing profile
	I1018 13:17:26.710500  999679 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:17:26.710657  999679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:17:26.710666  999679 out.go:374] Setting ErrFile to fd 2...
	I1018 13:17:26.710671  999679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:17:26.710944  999679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:17:26.711798  999679 out.go:368] Setting JSON to false
	I1018 13:17:26.712876  999679 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17999,"bootTime":1760775448,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:17:26.712970  999679 start.go:141] virtualization:  
	I1018 13:17:26.716253  999679 out.go:179] * [kubernetes-upgrade-022190] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:17:26.719292  999679 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:17:26.719384  999679 notify.go:220] Checking for updates...
	I1018 13:17:26.725376  999679 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:17:26.728502  999679 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:17:26.731313  999679 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:17:26.734365  999679 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:17:26.737155  999679 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:17:26.716349  999649 start.go:305] selected driver: docker
	I1018 13:17:26.716369  999649 start.go:925] validating driver "docker" against &{Name:pause-581407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:26.716500  999649 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:17:26.716604  999649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:17:26.811858  999649 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 13:17:26.800241862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:17:26.812272  999649 cni.go:84] Creating CNI manager for ""
	I1018 13:17:26.812334  999649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:17:26.812375  999649 start.go:349] cluster config:
	{Name:pause-581407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:26.815807  999649 out.go:179] * Starting "pause-581407" primary control-plane node in "pause-581407" cluster
	I1018 13:17:26.818780  999649 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:17:26.821783  999649 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:17:26.743275  999679 config.go:182] Loaded profile config "kubernetes-upgrade-022190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:26.743910  999679 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:17:26.816802  999679 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:17:26.816932  999679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:17:26.903375  999679 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 13:17:26.89148576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:17:26.903471  999679 docker.go:318] overlay module found
	I1018 13:17:26.906839  999679 out.go:179] * Using the docker driver based on existing profile
	I1018 13:17:26.824725  999649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:17:26.824785  999649 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:17:26.824795  999649 cache.go:58] Caching tarball of preloaded images
	I1018 13:17:26.824885  999649 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:17:26.824895  999649 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:17:26.825056  999649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/config.json ...
	I1018 13:17:26.825288  999649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:17:26.856324  999649 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:17:26.856347  999649 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:17:26.856362  999649 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:17:26.856392  999649 start.go:360] acquireMachinesLock for pause-581407: {Name:mk4d6dae8637ceaf27b6457e0697449ed109c7f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:17:26.856461  999649 start.go:364] duration metric: took 37.432µs to acquireMachinesLock for "pause-581407"
	I1018 13:17:26.856486  999649 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:17:26.856494  999649 fix.go:54] fixHost starting: 
	I1018 13:17:26.856758  999649 cli_runner.go:164] Run: docker container inspect pause-581407 --format={{.State.Status}}
	I1018 13:17:26.909485  999649 fix.go:112] recreateIfNeeded on pause-581407: state=Running err=<nil>
	W1018 13:17:26.909516  999649 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 13:17:26.910108  999679 start.go:305] selected driver: docker
	I1018 13:17:26.910126  999679 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-022190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:26.910201  999679 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:17:26.910997  999679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:17:27.005622  999679 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 13:17:26.990112413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:17:27.006004  999679 cni.go:84] Creating CNI manager for ""
	I1018 13:17:27.006068  999679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:17:27.006111  999679 start.go:349] cluster config:
	{Name:kubernetes-upgrade-022190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:27.009341  999679 out.go:179] * Starting "kubernetes-upgrade-022190" primary control-plane node in "kubernetes-upgrade-022190" cluster
	I1018 13:17:27.012107  999679 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:17:27.015362  999679 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:17:27.018313  999679 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:17:27.018385  999679 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:17:27.018411  999679 cache.go:58] Caching tarball of preloaded images
	I1018 13:17:27.018493  999679 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:17:27.018507  999679 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:17:27.018611  999679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/config.json ...
	I1018 13:17:27.018844  999679 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:17:27.044022  999679 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:17:27.044045  999679 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:17:27.044062  999679 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:17:27.044091  999679 start.go:360] acquireMachinesLock for kubernetes-upgrade-022190: {Name:mkdab1493b0fc19844757773d6aecef6d7580418 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:17:27.044197  999679 start.go:364] duration metric: took 71.262µs to acquireMachinesLock for "kubernetes-upgrade-022190"
	I1018 13:17:27.044223  999679 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:17:27.044233  999679 fix.go:54] fixHost starting: 
	I1018 13:17:27.044506  999679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022190 --format={{.State.Status}}
	I1018 13:17:27.077811  999679 fix.go:112] recreateIfNeeded on kubernetes-upgrade-022190: state=Running err=<nil>
	W1018 13:17:27.077839  999679 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 13:17:27.081427  999679 out.go:252] * Updating the running docker "kubernetes-upgrade-022190" container ...
	I1018 13:17:27.081471  999679 machine.go:93] provisionDockerMachine start ...
	I1018 13:17:27.081548  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:27.101747  999679 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.102077  999679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34102 <nil> <nil>}
	I1018 13:17:27.102093  999679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:17:27.299345  999679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-022190
	
	I1018 13:17:27.299371  999679 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-022190"
	I1018 13:17:27.299447  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:27.317679  999679 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.318000  999679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34102 <nil> <nil>}
	I1018 13:17:27.318018  999679 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-022190 && echo "kubernetes-upgrade-022190" | sudo tee /etc/hostname
	I1018 13:17:27.518763  999679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-022190
	
	I1018 13:17:27.518845  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:27.540800  999679 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.541119  999679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34102 <nil> <nil>}
	I1018 13:17:27.541138  999679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-022190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-022190/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-022190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:17:27.720297  999679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:17:27.720323  999679 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:17:27.720345  999679 ubuntu.go:190] setting up certificates
	I1018 13:17:27.720355  999679 provision.go:84] configureAuth start
	I1018 13:17:27.720414  999679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-022190
	I1018 13:17:27.760562  999679 provision.go:143] copyHostCerts
	I1018 13:17:27.760642  999679 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:17:27.760660  999679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:17:27.760722  999679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:17:27.760917  999679 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:17:27.760927  999679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:17:27.760956  999679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:17:27.761042  999679 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:17:27.761048  999679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:17:27.761072  999679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:17:27.761128  999679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-022190 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-022190 localhost minikube]
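The provisioner regenerates the machine server certificate with the SANs listed above before copying it to the node. If a certificate mismatch is suspected, the generated SANs can be checked directly (a sketch, not part of the captured log; the path is the server cert path named in the line above):

    # sketch: confirm the SANs in the regenerated server cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'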
	I1018 13:17:28.052088  999679 provision.go:177] copyRemoteCerts
	I1018 13:17:28.052185  999679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:17:28.052236  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:28.078375  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:28.208991  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:17:28.234860  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:17:28.287283  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1018 13:17:28.337750  999679 provision.go:87] duration metric: took 617.371275ms to configureAuth
	I1018 13:17:28.337821  999679 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:17:28.338061  999679 config.go:182] Loaded profile config "kubernetes-upgrade-022190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:28.338231  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:28.360945  999679 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:28.361341  999679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34102 <nil> <nil>}
	I1018 13:17:28.361361  999679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:17:29.071822  999679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:17:29.071842  999679 machine.go:96] duration metric: took 1.990361999s to provisionDockerMachine
	I1018 13:17:29.071853  999679 start.go:293] postStartSetup for "kubernetes-upgrade-022190" (driver="docker")
	I1018 13:17:29.071865  999679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:17:29.071929  999679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:17:29.071987  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:29.090215  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:29.195841  999679 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:17:29.199183  999679 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:17:29.199208  999679 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:17:29.199219  999679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:17:29.199272  999679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:17:29.199355  999679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:17:29.199474  999679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:17:29.207011  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:17:29.224867  999679 start.go:296] duration metric: took 152.998626ms for postStartSetup
	I1018 13:17:29.224968  999679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:17:29.225024  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:29.243011  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:29.349325  999679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:17:29.354264  999679 fix.go:56] duration metric: took 2.310023064s for fixHost
	I1018 13:17:29.354289  999679 start.go:83] releasing machines lock for "kubernetes-upgrade-022190", held for 2.310077464s
	I1018 13:17:29.354360  999679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-022190
	I1018 13:17:29.372124  999679 ssh_runner.go:195] Run: cat /version.json
	I1018 13:17:29.372198  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:29.372441  999679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:17:29.372498  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:29.415699  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:29.419834  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:29.561032  999679 ssh_runner.go:195] Run: systemctl --version
	I1018 13:17:29.694127  999679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:17:29.779610  999679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:17:29.789301  999679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:17:29.789428  999679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:17:29.801131  999679 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:17:29.801205  999679 start.go:495] detecting cgroup driver to use...
	I1018 13:17:29.801260  999679 detect.go:187] detected "cgroupfs" cgroup driver on host os
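The docker info dumps earlier in this log already report CgroupDriver:cgroupfs, which matches the driver detected here. A quick manual cross-check against the daemon (a sketch; not necessarily the exact probe detect.go performs):

    # sketch: ask the Docker daemon for its cgroup driver
    docker info --format '{{.CgroupDriver}}'    # prints: cgroupfs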
	I1018 13:17:29.801335  999679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:17:29.825333  999679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:17:29.843937  999679 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:17:29.844053  999679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:17:29.866893  999679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:17:29.894991  999679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:17:30.084607  999679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:17:30.337666  999679 docker.go:234] disabling docker service ...
	I1018 13:17:30.337791  999679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:17:30.354457  999679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:17:30.378611  999679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:17:30.587007  999679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:17:30.789289  999679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:17:30.807224  999679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:17:30.827795  999679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:17:30.827884  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.844853  999679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:17:30.844954  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.858975  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.869151  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.882096  999679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:17:30.901596  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.911074  999679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.919702  999679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:30.933433  999679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:17:30.952187  999679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:17:30.968237  999679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:31.175065  999679 ssh_runner.go:195] Run: sudo systemctl restart crio
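The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. The resulting drop-in can be checked from inside the node (a sketch, not part of the captured log; the expected values are the ones the sed commands set):

    # sketch: inspect the rewritten CRI-O drop-in on the node
    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-022190 sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # should now contain:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]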
	I1018 13:17:31.383196  999679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:17:31.383296  999679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:17:31.387285  999679 start.go:563] Will wait 60s for crictl version
	I1018 13:17:31.387402  999679 ssh_runner.go:195] Run: which crictl
	I1018 13:17:31.391537  999679 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:17:31.420540  999679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:17:31.420640  999679 ssh_runner.go:195] Run: crio --version
	I1018 13:17:31.459159  999679 ssh_runner.go:195] Run: crio --version
	I1018 13:17:31.504123  999679 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
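The version probe above (crictl reporting RuntimeName cri-o, RuntimeVersion 1.34.1) is what the "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1" line summarizes. The same probes can be repeated by hand against the running profile (a sketch, not part of the captured log):

    # sketch: re-run the runtime probes from this step
    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-022190 sudo crictl version
    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-022190 crio --version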
	I1018 13:17:26.912476  999649 out.go:252] * Updating the running docker "pause-581407" container ...
	I1018 13:17:26.912508  999649 machine.go:93] provisionDockerMachine start ...
	I1018 13:17:26.912584  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:26.932233  999649 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:26.933622  999649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1018 13:17:26.933701  999649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:17:27.131346  999649 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-581407
	
	I1018 13:17:27.131366  999649 ubuntu.go:182] provisioning hostname "pause-581407"
	I1018 13:17:27.131435  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:27.163897  999649 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.164213  999649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1018 13:17:27.164231  999649 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-581407 && echo "pause-581407" | sudo tee /etc/hostname
	I1018 13:17:27.347665  999649 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-581407
	
	I1018 13:17:27.347757  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:27.380116  999649 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:27.380432  999649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1018 13:17:27.380449  999649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-581407' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-581407/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-581407' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:17:27.560122  999649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:17:27.560156  999649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:17:27.560183  999649 ubuntu.go:190] setting up certificates
	I1018 13:17:27.560193  999649 provision.go:84] configureAuth start
	I1018 13:17:27.560257  999649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-581407
	I1018 13:17:27.582877  999649 provision.go:143] copyHostCerts
	I1018 13:17:27.582943  999649 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:17:27.582961  999649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:17:27.583040  999649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:17:27.583133  999649 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:17:27.583139  999649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:17:27.583163  999649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:17:27.583213  999649 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:17:27.583218  999649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:17:27.583245  999649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:17:27.583295  999649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.pause-581407 san=[127.0.0.1 192.168.76.2 localhost minikube pause-581407]
	I1018 13:17:27.784578  999649 provision.go:177] copyRemoteCerts
	I1018 13:17:27.784672  999649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:17:27.784742  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:27.809286  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:27.929570  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:17:27.954569  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 13:17:27.976720  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:17:28.013992  999649 provision.go:87] duration metric: took 453.773931ms to configureAuth
	I1018 13:17:28.014018  999649 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:17:28.014257  999649 config.go:182] Loaded profile config "pause-581407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:28.014374  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:28.035788  999649 main.go:141] libmachine: Using SSH client type: native
	I1018 13:17:28.036099  999649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34132 <nil> <nil>}
	I1018 13:17:28.036115  999649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:17:31.508135  999679 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-022190 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:17:31.525338  999679 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:17:31.529384  999679 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-022190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:17:31.529503  999679 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:17:31.529554  999679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:17:31.562491  999679 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:17:31.562513  999679 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:17:31.562576  999679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:17:31.594231  999679 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:17:31.594251  999679 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:17:31.594258  999679 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 13:17:31.594360  999679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-022190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:17:31.594442  999679 ssh_runner.go:195] Run: crio config
	I1018 13:17:31.661758  999679 cni.go:84] Creating CNI manager for ""
	I1018 13:17:31.661781  999679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:17:31.661804  999679 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:17:31.661831  999679 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-022190 NodeName:kubernetes-upgrade-022190 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:17:31.661969  999679 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-022190"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:17:31.662055  999679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:17:31.670293  999679 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:17:31.670418  999679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:17:31.678246  999679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1018 13:17:31.692158  999679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:17:31.706657  999679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
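The kubeadm.yaml.new just copied (2222 bytes) is the four-document kubeadm config rendered above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch, assuming gopkg.in/yaml.v3, that only checks such a file parses as a well-formed multi-document YAML stream; it is not kubeadm's own validation.

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// countYAMLDocs decodes every document in a multi-document YAML stream and
// returns how many parsed cleanly; the generated kubeadm config has four.
func countYAMLDocs(path string) (int, error) {
	f, err := os.Open(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	n := 0
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return n, nil
			}
			return n, err
		}
		n++
	}
}

func main() {
	n, err := countYAMLDocs("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("parsed documents:", n)
}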
	I1018 13:17:31.720132  999679 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:17:31.724282  999679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:31.847867  999679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:17:31.863181  999679 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190 for IP: 192.168.85.2
	I1018 13:17:31.863206  999679 certs.go:195] generating shared ca certs ...
	I1018 13:17:31.863222  999679 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:31.863369  999679 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:17:31.863417  999679 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:17:31.863428  999679 certs.go:257] generating profile certs ...
	I1018 13:17:31.863508  999679 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.key
	I1018 13:17:31.863576  999679 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/apiserver.key.69ca1e2d
	I1018 13:17:31.863620  999679 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/proxy-client.key
	I1018 13:17:31.863785  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:17:31.863841  999679 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:17:31.863858  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:17:31.863887  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:17:31.863914  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:17:31.863940  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:17:31.863984  999679 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:17:31.864650  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:17:31.884265  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:17:31.902721  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:17:31.922291  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:17:31.941248  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1018 13:17:31.959904  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:17:31.978083  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:17:31.995144  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:17:32.017508  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:17:32.036936  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:17:32.055792  999679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:17:32.074252  999679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:17:32.087743  999679 ssh_runner.go:195] Run: openssl version
	I1018 13:17:32.094373  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:17:32.103068  999679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:17:32.107038  999679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:17:32.107110  999679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:17:32.148569  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:17:32.156610  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:17:32.165169  999679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:32.169161  999679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:32.169247  999679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:32.211640  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:17:32.219750  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:17:32.228155  999679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:17:32.233145  999679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:17:32.233219  999679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:17:32.274349  999679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
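The command pairs above explain the otherwise cryptic names under /etc/ssl/certs: `openssl x509 -hash` prints the subject hash (b5213941 for minikubeCA.pem, 3ec20f2e and 51391683 for the test certs), and the `<hash>.0` symlink is what lets OpenSSL-based clients find the CA. A small sketch of the same two steps, shelling out to openssl for the hash; this is illustrative, not minikube's helper, and creating the link needs root like the sudo calls above.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink that the log builds with `ln -fs`.
func linkCACert(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -f: replace an existing link if present
	if err := os.Symlink(pemPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created", link)
}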
	I1018 13:17:32.282453  999679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:17:32.286239  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:17:32.328584  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:17:32.369817  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:17:32.412737  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:17:32.454600  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:17:32.497936  999679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
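Each `-checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. The same check in Go using only the standard library; the file path is one of those from the log, and any PEM certificate works.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidForAnotherDay reports whether the first certificate in the PEM file
// is still valid 24h from now, mirroring `openssl x509 -noout -checkend 86400`.
func certValidForAnotherDay(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidForAnotherDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}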
	I1018 13:17:32.542003  999679 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-022190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-022190 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:32.542088  999679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:17:32.542152  999679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:17:32.575327  999679 cri.go:89] found id: "8e3f2864e7a88e5d18135c4a49b9a8d0bfe0d1970d8fd27361631de16677bd02"
	I1018 13:17:32.575348  999679 cri.go:89] found id: "65ebe734654c08110bab37ac69645aa818529163d0c54b77fdfa6a2d365dc9da"
	I1018 13:17:32.575354  999679 cri.go:89] found id: "dfc2c882414c80809287a665f372e6f4df67ef4083d36c10fe38f67360817634"
	I1018 13:17:32.575361  999679 cri.go:89] found id: "74d006fb029845a9437436f6107c51cac3db1f7c909ed6ef8629e15f2a2b7e6f"
	I1018 13:17:32.575365  999679 cri.go:89] found id: ""
	I1018 13:17:32.575428  999679 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:17:32.586239  999679 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:17:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:17:32.586320  999679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:17:32.593683  999679 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 13:17:32.593703  999679 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:17:32.593767  999679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:17:32.600869  999679 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:17:32.601592  999679 kubeconfig.go:125] found "kubernetes-upgrade-022190" server: "https://192.168.85.2:8443"
	I1018 13:17:32.602434  999679 kapi.go:59] client config for kubernetes-upgrade-022190: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 13:17:32.602928  999679 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 13:17:32.602945  999679 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 13:17:32.602952  999679 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 13:17:32.602957  999679 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 13:17:32.602967  999679 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 13:17:32.603302  999679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:17:32.610702  999679 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 13:17:32.610778  999679 kubeadm.go:601] duration metric: took 17.068126ms to restartPrimaryControlPlane
	I1018 13:17:32.610794  999679 kubeadm.go:402] duration metric: took 68.807535ms to StartCluster
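The `diff -u` a few lines up is the whole decision: when the kubeadm config already on the node matches the freshly generated one, minikube concludes the running cluster needs no reconfiguration. A simplified sketch of that comparison, byte-for-byte instead of a real diff, which is enough for an equal/unequal answer.

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

// needsReconfiguration reports whether the freshly rendered kubeadm config
// differs from the one already on the node, mirroring the exit-code check of
// `sudo diff -u kubeadm.yaml kubeadm.yaml.new` in the log above.
func needsReconfiguration(current, generated string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(generated)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, err := needsReconfiguration(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("requires reconfiguration:", changed)
}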
	I1018 13:17:32.610810  999679 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:32.610893  999679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:17:32.611886  999679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:32.612129  999679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:17:32.612359  999679 config.go:182] Loaded profile config "kubernetes-upgrade-022190": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:32.612425  999679 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:17:32.612611  999679 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-022190"
	I1018 13:17:32.612635  999679 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-022190"
	W1018 13:17:32.612644  999679 addons.go:247] addon storage-provisioner should already be in state true
	I1018 13:17:32.612685  999679 host.go:66] Checking if "kubernetes-upgrade-022190" exists ...
	I1018 13:17:32.612825  999679 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-022190"
	I1018 13:17:32.612863  999679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-022190"
	I1018 13:17:32.613120  999679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022190 --format={{.State.Status}}
	I1018 13:17:32.613285  999679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022190 --format={{.State.Status}}
	I1018 13:17:32.618346  999679 out.go:179] * Verifying Kubernetes components...
	I1018 13:17:32.621084  999679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:32.647774  999679 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:17:33.474908  999649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:17:33.474930  999649 machine.go:96] duration metric: took 6.562414337s to provisionDockerMachine
	I1018 13:17:33.474940  999649 start.go:293] postStartSetup for "pause-581407" (driver="docker")
	I1018 13:17:33.474951  999649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:17:33.475012  999649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:17:33.475050  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:33.501382  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:33.612067  999649 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:17:33.615695  999649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:17:33.615726  999649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:17:33.615743  999649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:17:33.615800  999649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:17:33.615907  999649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:17:33.616020  999649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:17:33.623890  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:17:33.642468  999649 start.go:296] duration metric: took 167.512703ms for postStartSetup
	I1018 13:17:33.642553  999649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:17:33.642614  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:33.662105  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:33.765115  999649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:17:33.770030  999649 fix.go:56] duration metric: took 6.913529211s for fixHost
	I1018 13:17:33.770056  999649 start.go:83] releasing machines lock for "pause-581407", held for 6.913582003s
	I1018 13:17:33.770125  999649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-581407
	I1018 13:17:33.786800  999649 ssh_runner.go:195] Run: cat /version.json
	I1018 13:17:33.786839  999649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:17:33.786862  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:33.786904  999649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-581407
	I1018 13:17:33.806539  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:33.828176  999649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34132 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/pause-581407/id_rsa Username:docker}
	I1018 13:17:34.011340  999649 ssh_runner.go:195] Run: systemctl --version
	I1018 13:17:34.018370  999649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:17:34.060533  999649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:17:34.065106  999649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:17:34.065185  999649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:17:34.074541  999649 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
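The find/mv one-liner above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs (kindnet on this docker+crio combination) stays active; in this run nothing matched. A rough equivalent in Go, using the same directory and suffix as the log; the glob patterns are an approximation of the find expression.

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNIConfigs renames bridge/podman CNI config files in
// /etc/cni/net.d to <name>.mk_disabled, roughly what the logged find/mv does.
func disableConflictingCNIConfigs(dir string) ([]string, error) {
	var moved []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return moved, err
		}
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, path)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableConflictingCNIConfigs("/etc/cni/net.d")
	if err != nil {
		log.Fatal(err)
	}
	if len(moved) == 0 {
		fmt.Println("no active bridge cni configs found - nothing to disable")
		return
	}
	fmt.Println("disabled:", moved)
}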
	I1018 13:17:34.074566  999649 start.go:495] detecting cgroup driver to use...
	I1018 13:17:34.074601  999649 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:17:34.074654  999649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:17:34.090986  999649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:17:34.107207  999649 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:17:34.107276  999649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:17:34.125299  999649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:17:34.142862  999649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:17:34.300649  999649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:17:34.439868  999649 docker.go:234] disabling docker service ...
	I1018 13:17:34.439956  999649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:17:34.461979  999649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:17:34.479456  999649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:17:34.634387  999649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:17:34.772493  999649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:17:34.788759  999649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:17:34.806459  999649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:17:34.806529  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.815229  999649 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:17:34.815300  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.825376  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.836364  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.849757  999649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:17:34.858181  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.871115  999649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.879927  999649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:17:34.889939  999649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:17:34.898070  999649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:17:34.905522  999649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:35.052347  999649 ssh_runner.go:195] Run: sudo systemctl restart crio
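The block of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and open unprivileged low ports via net.ipv4.ip_unprivileged_port_start=0, then reload systemd and restart CRI-O. A sketch of the first substitution done in Go rather than sed, with the same file and pause image as the log; writing the file needs root, like the sudo sed calls.

package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line of a CRI-O drop-in config,
// equivalent to the logged `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'`.
func setPauseImage(confPath, image string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	// Preserve the file's existing permissions when writing it back.
	info, err := os.Stat(confPath)
	if err != nil {
		return err
	}
	return os.WriteFile(confPath, updated, info.Mode())
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause image pinned; restart crio for it to take effect")
}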
	I1018 13:17:35.227050  999649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:17:35.227175  999649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:17:35.231142  999649 start.go:563] Will wait 60s for crictl version
	I1018 13:17:35.231210  999649 ssh_runner.go:195] Run: which crictl
	I1018 13:17:35.234926  999649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:17:35.262917  999649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:17:35.263066  999649 ssh_runner.go:195] Run: crio --version
	I1018 13:17:35.293648  999649 ssh_runner.go:195] Run: crio --version
	I1018 13:17:35.329621  999649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:17:35.332535  999649 cli_runner.go:164] Run: docker network inspect pause-581407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:17:35.350422  999649 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 13:17:35.354702  999649 kubeadm.go:883] updating cluster {Name:pause-581407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:17:35.354844  999649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:17:35.354890  999649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:17:35.393429  999649 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:17:35.393454  999649 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:17:35.393512  999649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:17:35.420435  999649 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:17:35.420463  999649 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:17:35.420473  999649 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 13:17:35.420582  999649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-581407 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:17:35.420674  999649 ssh_runner.go:195] Run: crio config
	I1018 13:17:35.479207  999649 cni.go:84] Creating CNI manager for ""
	I1018 13:17:35.479298  999649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:17:35.479341  999649 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:17:35.479382  999649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-581407 NodeName:pause-581407 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:17:35.479547  999649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-581407"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:17:35.479632  999649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:17:35.488967  999649 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:17:35.489036  999649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:17:35.498172  999649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1018 13:17:35.512091  999649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:17:35.525306  999649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 13:17:35.538630  999649 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:17:35.542538  999649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:35.680863  999649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:17:35.694659  999649 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407 for IP: 192.168.76.2
	I1018 13:17:35.694679  999649 certs.go:195] generating shared ca certs ...
	I1018 13:17:35.694694  999649 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:35.694919  999649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:17:35.694994  999649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:17:35.695009  999649 certs.go:257] generating profile certs ...
	I1018 13:17:35.695122  999649 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.key
	I1018 13:17:35.695216  999649 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/apiserver.key.4c14d249
	I1018 13:17:35.695290  999649 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/proxy-client.key
	I1018 13:17:35.695424  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:17:35.695477  999649 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:17:35.695494  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:17:35.695519  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:17:35.695584  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:17:35.695617  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:17:35.695743  999649 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:17:35.696428  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:17:35.717414  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:17:35.735033  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:17:35.753881  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:17:35.771693  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 13:17:35.789182  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 13:17:35.814586  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:17:35.837866  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:17:35.860683  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:17:35.881879  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:17:35.902257  999649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:17:35.919536  999649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:17:35.933405  999649 ssh_runner.go:195] Run: openssl version
	I1018 13:17:35.939515  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:17:35.948131  999649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:35.952063  999649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:35.952199  999649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:17:35.993254  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:17:36.002298  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:17:36.014965  999649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:17:36.019261  999649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:17:36.019386  999649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:17:36.061387  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 13:17:36.069596  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:17:36.078373  999649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:17:36.082207  999649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:17:36.082312  999649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:17:36.123523  999649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:17:36.131488  999649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:17:36.135379  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:17:36.188563  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:17:36.238957  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:17:36.305353  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:17:36.375871  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:17:36.523382  999649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 13:17:32.651038  999679 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:17:32.651065  999679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:17:32.651145  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:32.656012  999679 kapi.go:59] client config for kubernetes-upgrade-022190: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kubernetes-upgrade-022190/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 13:17:32.656328  999679 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-022190"
	W1018 13:17:32.656346  999679 addons.go:247] addon default-storageclass should already be in state true
	I1018 13:17:32.656371  999679 host.go:66] Checking if "kubernetes-upgrade-022190" exists ...
	I1018 13:17:32.656800  999679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-022190 --format={{.State.Status}}
	I1018 13:17:32.701360  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:32.714761  999679 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:32.714783  999679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:17:32.714856  999679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-022190
	I1018 13:17:32.749553  999679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34102 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/kubernetes-upgrade-022190/id_rsa Username:docker}
	I1018 13:17:32.837610  999679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:17:32.846795  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:17:32.853954  999679 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:17:32.854039  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:32.876593  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1018 13:17:32.945380  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:32.945419  999679 retry.go:31] will retry after 238.421339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 13:17:32.962116  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:32.962150  999679 retry.go:31] will retry after 308.496613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
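
The failures above mean kubectl could not validate the addon manifests: client-side validation fetches the apiserver's OpenAPI document, and nothing is listening on localhost:8443 while the control plane restarts, so the request is refused. The retry.go lines show minikube simply re-running the apply after a short, growing delay. A minimal sketch of that retry-with-backoff pattern (hypothetical helper name and delays chosen for illustration, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts run out,
// doubling the delay between tries. Validation stays on so a temporarily
// unreachable apiserver shows up as a retry rather than a skipped check.
func applyWithRetry(manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v\n%s", i, err, out)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("apply of %s did not succeed after %d attempts", manifest, attempts)
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println(err)
	}
}
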
	I1018 13:17:33.184524  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:33.253292  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.253325  999679 retry.go:31] will retry after 195.153176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.271506  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1018 13:17:33.345549  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.345578  999679 retry.go:31] will retry after 476.50141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.354853  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:33.449235  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:33.543331  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.543369  999679 retry.go:31] will retry after 559.285396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.822911  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:33.854077  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:33.921464  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:33.921493  999679 retry.go:31] will retry after 525.012815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.102856  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:34.190672  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.190707  999679 retry.go:31] will retry after 570.237713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.354985  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:34.446730  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1018 13:17:34.538740  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.538773  999679 retry.go:31] will retry after 1.253477765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.761164  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:34.848198  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.848228  999679 retry.go:31] will retry after 1.805738404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:34.854542  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:35.354261  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:35.792841  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:35.854212  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:35.892777  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:35.892804  999679 retry.go:31] will retry after 1.428518342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:36.354179  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:36.654969  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:17:36.619316  999649 kubeadm.go:400] StartCluster: {Name:pause-581407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-581407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:17:36.619435  999649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:17:36.619498  999649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:17:36.694930  999649 cri.go:89] found id: "3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441"
	I1018 13:17:36.695027  999649 cri.go:89] found id: "99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71"
	I1018 13:17:36.695049  999649 cri.go:89] found id: "db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715"
	I1018 13:17:36.695068  999649 cri.go:89] found id: "cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6"
	I1018 13:17:36.695098  999649 cri.go:89] found id: "1d867435092e2ae05fa215ae5358374d93d6b6d49cf7df765b0828501daa311a"
	I1018 13:17:36.695121  999649 cri.go:89] found id: "32797f415a8c26a1b4aa88afa3f9137729690bd7d45311a68519beaccac43d20"
	I1018 13:17:36.695140  999649 cri.go:89] found id: "1b1355c4f0d44581bb6ae756cce066454470ecb4bbe2947c437b4450819922e1"
	I1018 13:17:36.695200  999649 cri.go:89] found id: "70e591f358e983efcdf4f01017e333dfaa6bfb26b93122e90d41ce990b9ac96b"
	I1018 13:17:36.695229  999649 cri.go:89] found id: "964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361"
	I1018 13:17:36.695275  999649 cri.go:89] found id: "dadb5dd59eca975bd8d89eca080be31edffaa1af272cf6f32406ac8cd85fc5c8"
	I1018 13:17:36.695304  999649 cri.go:89] found id: "69a5d51f6f41fad47a432f28e5b9ebd476f9e34e0169affa15deeb3be20b5ef0"
	I1018 13:17:36.695332  999649 cri.go:89] found id: "a5a7eba2cfa84bb8e262a1c5817166f519b4e09375861ce4a544520381703cc1"
	I1018 13:17:36.695372  999649 cri.go:89] found id: "7adb65405ca120df8f04c836231d865be3c0d67b70d53b94513214e3425de043"
	I1018 13:17:36.695394  999649 cri.go:89] found id: "4d2ab802325be7559c201bd12b2e174c54b89efbf9a5e54f0f6d4ff1d99f5680"
	I1018 13:17:36.695424  999649 cri.go:89] found id: ""
	I1018 13:17:36.695536  999649 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:17:36.730995  999649 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:17:36Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:17:36.731158  999649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:17:36.744451  999649 kubeadm.go:416] found existing configuration files, will attempt cluster restart
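
The unpause check just above asked runc for its container list and failed because /run/runc does not exist on this cri-o node; that failure is tolerated, and minikube instead looks for the kubeadm files on disk, finds them, and settles on a cluster restart. For reference, a rough sketch of reading `runc list -f json`, with the state root passed explicitly since that root is runtime configuration and is exactly the detail that differed here:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerState mirrors the fields of interest from `runc list -f json`.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers lists containers known to runc under the given state root
// and returns the IDs of those currently paused.
func pausedContainers(root string) ([]string, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var states []containerState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers("/run/runc")
	fmt.Println(ids, err)
}
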
	I1018 13:17:36.744529  999649 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:17:36.744646  999649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:17:36.755553  999649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:17:36.756405  999649 kubeconfig.go:125] found "pause-581407" server: "https://192.168.76.2:8443"
	I1018 13:17:36.757741  999649 kapi.go:59] client config for pause-581407: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
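
The &rest.Config dump above is the client-go configuration minikube assembles for the profile: the apiserver endpoint plus the per-profile client certificate, key, and cluster CA. A bare-bones equivalent built with the public client-go API (standard usage shown for illustration, not minikube's kapi.go):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Certificate paths follow the layout seen in the log; adjust for your profile.
	cfg := &rest.Config{
		Host: "https://192.168.76.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt",
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
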
	I1018 13:17:36.758570  999649 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 13:17:36.758699  999649 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 13:17:36.758735  999649 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 13:17:36.758754  999649 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 13:17:36.758790  999649 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 13:17:36.759264  999649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:17:36.770100  999649 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 13:17:36.770204  999649 kubeadm.go:601] duration metric: took 25.65236ms to restartPrimaryControlPlane
	I1018 13:17:36.770228  999649 kubeadm.go:402] duration metric: took 150.921439ms to StartCluster
	I1018 13:17:36.770279  999649 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:36.770409  999649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:17:36.771578  999649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:17:36.771997  999649 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:17:36.772644  999649 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:17:36.772747  999649 config.go:182] Loaded profile config "pause-581407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:17:36.775857  999649 out.go:179] * Verifying Kubernetes components...
	I1018 13:17:36.779006  999649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:17:36.779195  999649 out.go:179] * Enabled addons: 
	I1018 13:17:36.782146  999649 addons.go:514] duration metric: took 9.495028ms for enable addons: enabled=[]
	I1018 13:17:37.035639  999649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:17:37.059765  999649 node_ready.go:35] waiting up to 6m0s for node "pause-581407" to be "Ready" ...
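
node_ready.go now polls the node object until its Ready condition turns True (it takes about 4.6 seconds further down). A compact version of that wait, assuming a clientset like the one sketched above; the poll interval and timeout are illustrative:

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition is True or the
// timeout expires.
func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}
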
	W1018 13:17:36.807012  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:36.807045  999679 retry.go:31] will retry after 1.780691464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:36.854309  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:37.322268  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:37.354746  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:37.447700  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:37.447731  999679 retry.go:31] will retry after 2.321151428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:37.854148  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:38.354432  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:38.588479  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:38.716322  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:38.716361  999679 retry.go:31] will retry after 1.594670935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:38.854715  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:39.354601  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:39.769638  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:39.855128  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:39.895983  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:39.896071  999679 retry.go:31] will retry after 2.664209537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:40.311225  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:17:40.354820  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:40.415315  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:40.415411  999679 retry.go:31] will retry after 5.056501708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:40.854682  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:41.354573  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:41.654846  999649 node_ready.go:49] node "pause-581407" is "Ready"
	I1018 13:17:41.654878  999649 node_ready.go:38] duration metric: took 4.595069392s for node "pause-581407" to be "Ready" ...
	I1018 13:17:41.654891  999649 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:17:41.655006  999649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:41.672353  999649 api_server.go:72] duration metric: took 4.900151249s to wait for apiserver process to appear ...
	I1018 13:17:41.672379  999649 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:17:41.672399  999649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:17:41.686877  999649 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 13:17:41.686910  999649 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 13:17:42.173530  999649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:17:42.187978  999649 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:17:42.188043  999649 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
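
The 403 and 500 responses above are the normal progression while the apiserver finishes starting: the probe calls /healthz anonymously, so it is forbidden until the RBAC bootstrap roles exist, and after that the verbose output lists whichever poststarthooks are still pending. Only a plain 200 "ok" (which arrives a few lines below) counts as healthy. A stand-alone probe in the same spirit; certificate verification is skipped only because this is a throwaway check against minikube's self-signed CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is not in the host trust store, so this
		// local-only probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
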
	I1018 13:17:42.672523  999649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:17:42.686764  999649 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:17:42.686805  999649 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:17:43.173518  999649 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:17:43.182257  999649 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 13:17:43.183516  999649 api_server.go:141] control plane version: v1.34.1
	I1018 13:17:43.183543  999649 api_server.go:131] duration metric: took 1.511156868s to wait for apiserver health ...
	I1018 13:17:43.183554  999649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:17:43.187136  999649 system_pods.go:59] 7 kube-system pods found
	I1018 13:17:43.187174  999649 system_pods.go:61] "coredns-66bc5c9577-tzdm5" [9d2e1e6f-52ac-477c-b94f-c5b39e401dde] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:17:43.187183  999649 system_pods.go:61] "etcd-pause-581407" [e98f9f03-1631-4c2e-ba26-43fd3f26abfd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:17:43.187190  999649 system_pods.go:61] "kindnet-8jjd5" [ce4339d5-6ec9-44ba-891a-207552e6e2d8] Running
	I1018 13:17:43.187198  999649 system_pods.go:61] "kube-apiserver-pause-581407" [757bd729-dacd-4d7f-a80b-b8853434f4f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:17:43.187216  999649 system_pods.go:61] "kube-controller-manager-pause-581407" [bffbe8fa-169d-4e7a-9c06-a73513ea2c20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:17:43.187228  999649 system_pods.go:61] "kube-proxy-4l8qb" [d0d30679-b467-4886-9e01-214192aa7e54] Running
	I1018 13:17:43.187235  999649 system_pods.go:61] "kube-scheduler-pause-581407" [5697fd37-f14d-4489-8d2d-345ae7bbd321] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:17:43.187241  999649 system_pods.go:74] duration metric: took 3.671123ms to wait for pod list to return data ...
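
system_pods.go is effectively listing everything in kube-system and printing, per pod, the phase plus any conditions that are not yet satisfied, which is where the "Running / Ready:ContainersNotReady" annotations above come from. Roughly, with a clientset as sketched earlier:

package syspods

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printSystemPods lists kube-system pods and shows each pod's phase together
// with whether its Ready condition is True yet.
func printSystemPods(client kubernetes.Interface) error {
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s: phase=%s ready=%t\n", p.Name, p.Status.Phase, ready)
	}
	return nil
}
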
	I1018 13:17:43.187252  999649 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:17:43.189989  999649 default_sa.go:45] found service account: "default"
	I1018 13:17:43.190071  999649 default_sa.go:55] duration metric: took 2.811225ms for default service account to be created ...
	I1018 13:17:43.190096  999649 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:17:43.196445  999649 system_pods.go:86] 7 kube-system pods found
	I1018 13:17:43.196482  999649 system_pods.go:89] "coredns-66bc5c9577-tzdm5" [9d2e1e6f-52ac-477c-b94f-c5b39e401dde] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:17:43.196492  999649 system_pods.go:89] "etcd-pause-581407" [e98f9f03-1631-4c2e-ba26-43fd3f26abfd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:17:43.196498  999649 system_pods.go:89] "kindnet-8jjd5" [ce4339d5-6ec9-44ba-891a-207552e6e2d8] Running
	I1018 13:17:43.196505  999649 system_pods.go:89] "kube-apiserver-pause-581407" [757bd729-dacd-4d7f-a80b-b8853434f4f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:17:43.196512  999649 system_pods.go:89] "kube-controller-manager-pause-581407" [bffbe8fa-169d-4e7a-9c06-a73513ea2c20] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:17:43.196517  999649 system_pods.go:89] "kube-proxy-4l8qb" [d0d30679-b467-4886-9e01-214192aa7e54] Running
	I1018 13:17:43.196523  999649 system_pods.go:89] "kube-scheduler-pause-581407" [5697fd37-f14d-4489-8d2d-345ae7bbd321] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:17:43.196534  999649 system_pods.go:126] duration metric: took 6.431976ms to wait for k8s-apps to be running ...
	I1018 13:17:43.196547  999649 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:17:43.196608  999649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:17:43.209945  999649 system_svc.go:56] duration metric: took 13.375146ms WaitForService to wait for kubelet
	I1018 13:17:43.209977  999649 kubeadm.go:586] duration metric: took 6.437778963s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:17:43.209998  999649 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:17:43.213091  999649 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:17:43.213127  999649 node_conditions.go:123] node cpu capacity is 2
	I1018 13:17:43.213141  999649 node_conditions.go:105] duration metric: took 3.136652ms to run NodePressure ...
	I1018 13:17:43.213153  999649 start.go:241] waiting for startup goroutines ...
	I1018 13:17:43.213161  999649 start.go:246] waiting for cluster config update ...
	I1018 13:17:43.213169  999649 start.go:255] writing updated cluster config ...
	I1018 13:17:43.213470  999649 ssh_runner.go:195] Run: rm -f paused
	I1018 13:17:43.217003  999649 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:17:43.217691  999649 kapi.go:59] client config for pause-581407: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/profiles/pause-581407/client.key", CAFile:"/home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 13:17:43.220847  999649 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tzdm5" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 13:17:45.231221  999649 pod_ready.go:104] pod "coredns-66bc5c9577-tzdm5" is not "Ready", error: <nil>
	I1018 13:17:41.855064  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:42.354827  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:42.561223  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1018 13:17:42.648281  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:42.648318  999679 retry.go:31] will retry after 6.153971021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:42.854852  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:43.355078  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:43.854921  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:44.355070  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:44.854172  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:45.354683  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:45.473365  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:45.568339  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:45.568372  999679 retry.go:31] will retry after 5.478216126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:45.854889  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:46.354216  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:47.726752  999649 pod_ready.go:104] pod "coredns-66bc5c9577-tzdm5" is not "Ready", error: <nil>
	I1018 13:17:48.726927  999649 pod_ready.go:94] pod "coredns-66bc5c9577-tzdm5" is "Ready"
	I1018 13:17:48.726957  999649 pod_ready.go:86] duration metric: took 5.506085138s for pod "coredns-66bc5c9577-tzdm5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.729816  999649 pod_ready.go:83] waiting for pod "etcd-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.734380  999649 pod_ready.go:94] pod "etcd-pause-581407" is "Ready"
	I1018 13:17:48.734411  999649 pod_ready.go:86] duration metric: took 4.565637ms for pod "etcd-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.736845  999649 pod_ready.go:83] waiting for pod "kube-apiserver-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.741768  999649 pod_ready.go:94] pod "kube-apiserver-pause-581407" is "Ready"
	I1018 13:17:48.741794  999649 pod_ready.go:86] duration metric: took 4.920242ms for pod "kube-apiserver-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.744415  999649 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:48.925251  999649 pod_ready.go:94] pod "kube-controller-manager-pause-581407" is "Ready"
	I1018 13:17:48.925281  999649 pod_ready.go:86] duration metric: took 180.837856ms for pod "kube-controller-manager-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:49.125619  999649 pod_ready.go:83] waiting for pod "kube-proxy-4l8qb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:49.525394  999649 pod_ready.go:94] pod "kube-proxy-4l8qb" is "Ready"
	I1018 13:17:49.525493  999649 pod_ready.go:86] duration metric: took 399.83359ms for pod "kube-proxy-4l8qb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:49.725895  999649 pod_ready.go:83] waiting for pod "kube-scheduler-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:50.126818  999649 pod_ready.go:94] pod "kube-scheduler-pause-581407" is "Ready"
	I1018 13:17:50.126897  999649 pod_ready.go:86] duration metric: took 400.925301ms for pod "kube-scheduler-pause-581407" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:17:50.126926  999649 pod_ready.go:40] duration metric: took 6.909888289s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:17:50.201402  999649 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:17:50.204763  999649 out.go:179] * Done! kubectl is now configured to use "pause-581407" cluster and "default" namespace by default
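
The closing summary for this profile flags a one-minor-version skew between the host kubectl (1.33.2) and the cluster (1.34.1); kubectl supports a skew of one minor version in either direction, so this is reported as information rather than treated as an error. The same check by hand, as a small illustrative helper:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two versions such as "1.33.2" and "1.34.1".
func minorSkew(a, b string) (int, error) {
	pa, pb := strings.Split(a, "."), strings.Split(b, ".")
	if len(pa) < 2 || len(pb) < 2 {
		return 0, fmt.Errorf("unexpected version format")
	}
	ma, err := strconv.Atoi(pa[1])
	if err != nil {
		return 0, err
	}
	mb, err := strconv.Atoi(pb[1])
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, _ := minorSkew("1.33.2", "1.34.1")
	fmt.Println("minor skew:", skew) // prints 1
}
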
	I1018 13:17:46.854995  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:47.354216  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:47.854226  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:48.354761  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:48.802531  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:48.855035  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 13:17:48.875219  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:48.875249  999679 retry.go:31] will retry after 5.966238101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:49.354887  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:49.854699  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:50.354983  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:50.854686  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:51.047531  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1018 13:17:51.129212  999679 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:51.129251  999679 retry.go:31] will retry after 10.969316836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 13:17:51.354705  999679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:17:51.388689  999679 api_server.go:72] duration metric: took 18.776516216s to wait for apiserver process to appear ...
	I1018 13:17:51.388718  999679 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:17:51.388747  999679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:17:51.389039  999679 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 13:17:51.889703  999679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:17:54.842906  999679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:17:55.298869  999679 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 13:17:55.298900  999679 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 13:17:55.298915  999679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:17:55.576108  999679 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 13:17:55.576146  999679 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 13:17:55.576163  999679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:17:55.708319  999679 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:17:55.708349  999679 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:17:55.889619  999679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:17:55.948475  999679 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:17:55.948561  999679 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:17:56.388850  999679 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:17:56.402748  999679 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:17:56.402830  999679 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
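	The [+]/[-] report above is the apiserver's verbose healthz output while its post-start hooks finish; minikube keeps polling it until every check passes. The same report can be pulled by hand once the cluster is reachable; a small sketch, assuming a kubeconfig for this cluster is active (an unauthenticated request gets the 403 shown earlier):

	    kubectl get --raw='/healthz?verbose'
	    # unauthenticated probe; expect 403 Forbidden for system:anonymous
	    curl -k https://192.168.85.2:8443/healthz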
	
	
	==> CRI-O <==
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.533104503Z" level=info msg="Created container db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715: kube-system/kube-apiserver-pause-581407/kube-apiserver" id=c9241dfe-f070-4251-8cb5-a09c9f8960a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.53451309Z" level=info msg="Starting container: db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715" id=79548e41-645b-4cdf-9ba7-198d55287f08 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.537353009Z" level=info msg="Started container" PID=2324 containerID=db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715 description=kube-system/kube-apiserver-pause-581407/kube-apiserver id=79548e41-645b-4cdf-9ba7-198d55287f08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=393fcdeafc5fbe5639a7d6449f86ca498be47ff4c823113792874b7633d1fe4d
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.550203973Z" level=info msg="Started container" PID=2315 containerID=cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6 description=kube-system/coredns-66bc5c9577-tzdm5/coredns id=3fa00fb4-2e62-45d7-a067-bf863d439b2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=821c92b817eaaba160d52d9d50fff8a4d8f800204209f435e68d9493b1f6e807
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.57009679Z" level=info msg="Created container 99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71: kube-system/kube-scheduler-pause-581407/kube-scheduler" id=5bc71247-93ce-4069-bcdb-a7592160aedf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.570661751Z" level=info msg="Starting container: 99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71" id=2edec95d-156d-49ee-9814-08330ae84435 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.576162815Z" level=info msg="Created container 3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441: kube-system/kube-proxy-4l8qb/kube-proxy" id=ee970cab-b4fb-4eef-bc12-e2d40569d3a4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.578844767Z" level=info msg="Started container" PID=2336 containerID=99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71 description=kube-system/kube-scheduler-pause-581407/kube-scheduler id=2edec95d-156d-49ee-9814-08330ae84435 name=/runtime.v1.RuntimeService/StartContainer sandboxID=218f64315d95613914a0ac1670fb9e5a28a88a1fcedc30bfd9cef6a9a9373b8b
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.57973335Z" level=info msg="Starting container: 3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441" id=65bd4147-401d-4a7d-b071-c36c0bd722c0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:17:36 pause-581407 crio[2059]: time="2025-10-18T13:17:36.596077498Z" level=info msg="Started container" PID=2332 containerID=3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441 description=kube-system/kube-proxy-4l8qb/kube-proxy id=65bd4147-401d-4a7d-b071-c36c0bd722c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0d94cf0a059bf19a5ddbf6bf6a3021a3c22bd1fb7fcf2f816611f76d75ce97e
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.658825681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.662423613Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.662459527Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.662482592Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.665995699Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.666031769Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.666056844Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.669435057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.669475066Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.669498516Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.672750878Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.672797172Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.672820212Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.676752326Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:17:46 pause-581407 crio[2059]: time="2025-10-18T13:17:46.676923839Z" level=info msg="Updated default CNI network name to kindnet"
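	The CREATE/WRITE/RENAME events show kindnet rewriting its CNI config atomically (write to a .temp file, then rename) and CRI-O's CNI monitor re-reading it each time. The resulting config can be inspected directly on the node; a sketch, assuming a shell on pause-581407 (for example via minikube -p pause-581407 ssh):

	    sudo cat /etc/cni/net.d/10-kindnet.conflist   # the file CRI-O reports as the default CNI network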
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	3c02f603c1a8e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   c0d94cf0a059b       kube-proxy-4l8qb                       kube-system
	99118ec2adf04       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   218f64315d956       kube-scheduler-pause-581407            kube-system
	db8db3b35d1a0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   393fcdeafc5fb       kube-apiserver-pause-581407            kube-system
	cf443ebecbf84       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   821c92b817eaa       coredns-66bc5c9577-tzdm5               kube-system
	1d867435092e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   bb38fa8e9f53b       etcd-pause-581407                      kube-system
	32797f415a8c2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   94243e5870ff6       kube-controller-manager-pause-581407   kube-system
	1b1355c4f0d44       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   9d7ebc83e257f       kindnet-8jjd5                          kube-system
	70e591f358e98       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago       Exited              coredns                   0                   821c92b817eaa       coredns-66bc5c9577-tzdm5               kube-system
	964b1c1291135       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   c0d94cf0a059b       kube-proxy-4l8qb                       kube-system
	dadb5dd59eca9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   9d7ebc83e257f       kindnet-8jjd5                          kube-system
	69a5d51f6f41f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   218f64315d956       kube-scheduler-pause-581407            kube-system
	a5a7eba2cfa84       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   bb38fa8e9f53b       etcd-pause-581407                      kube-system
	7adb65405ca12       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   393fcdeafc5fb       kube-apiserver-pause-581407            kube-system
	4d2ab802325be       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   94243e5870ff6       kube-controller-manager-pause-581407   kube-system
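	Each control-plane component appears twice in this table: a Running attempt-1 container started after the restart and the Exited attempt-0 container it replaced. The same view can be reproduced with crictl on the node; a minimal sketch, assuming a shell on pause-581407:

	    sudo crictl ps -a   # -a includes the Exited attempt-0 containers listed above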
	
	
	==> coredns [70e591f358e983efcdf4f01017e333dfaa6bfb26b93122e90d41ce990b9ac96b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33444 - 12329 "HINFO IN 4171231459783145117.769488335896153306. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.032623501s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cf443ebecbf84aad3f84640426ee167fe2cc07c7cae08b05704e30f3642be9f6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45046 - 37112 "HINFO IN 5890768828181610452.1586750936055456133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019392842s
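	The restarted coredns instance logs "waiting for Kubernetes API" repeatedly and eventually starts with an unsynced API, which matches the apiserver still coming up at that point in the run. To pull the same log for this pod directly, assuming kubectl is pointed at the pause-581407 cluster:

	    kubectl -n kube-system logs coredns-66bc5c9577-tzdm5   # pod name taken from the listing above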
	
	
	==> describe nodes <==
	Name:               pause-581407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-581407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=pause-581407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_16_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:16:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-581407
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:17:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:17:37 +0000   Sat, 18 Oct 2025 13:16:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:17:37 +0000   Sat, 18 Oct 2025 13:16:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:17:37 +0000   Sat, 18 Oct 2025 13:16:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:17:37 +0000   Sat, 18 Oct 2025 13:17:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-581407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                d9f2dc28-2898-4eca-a2e0-ac219f0a2925
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-tzdm5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-581407                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         80s
	  kube-system                 kindnet-8jjd5                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-581407             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-pause-581407    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-4l8qb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-581407             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Warning  CgroupV1                 89s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node pause-581407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node pause-581407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     89s (x8 over 89s)  kubelet          Node pause-581407 status is now: NodeHasSufficientPID
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node pause-581407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node pause-581407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node pause-581407 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           76s                node-controller  Node pause-581407 event: Registered Node pause-581407 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-581407 status is now: NodeReady
	  Normal   RegisteredNode           12s                node-controller  Node pause-581407 event: Registered Node pause-581407 in Controller
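	The block above (labels, conditions, capacity, pod requests, events) is a standard node description; the duplicated Starting/RegisteredNode events reflect the kubelet and kube-proxy being restarted during the pause test. To regenerate it, assuming the pause-581407 context is active:

	    kubectl describe node pause-581407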
	
	
	==> dmesg <==
	[ +36.492252] overlayfs: idmapped layers are currently not supported
	[Oct18 12:43] overlayfs: idmapped layers are currently not supported
	[Oct18 12:44] overlayfs: idmapped layers are currently not supported
	[  +3.556272] overlayfs: idmapped layers are currently not supported
	[Oct18 12:47] overlayfs: idmapped layers are currently not supported
	[Oct18 12:51] overlayfs: idmapped layers are currently not supported
	[Oct18 12:53] overlayfs: idmapped layers are currently not supported
	[Oct18 12:57] overlayfs: idmapped layers are currently not supported
	[Oct18 12:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:59] overlayfs: idmapped layers are currently not supported
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1d867435092e2ae05fa215ae5358374d93d6b6d49cf7df765b0828501daa311a] <==
	{"level":"warn","ts":"2025-10-18T13:17:39.509037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.530705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.549232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.566184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.585982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.608408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.627598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.650280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.671945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.719008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.721708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.732270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.750078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.778734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.824437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.880849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.912524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:39.949685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.011770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.043885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.141513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.162376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.200287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.239140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:17:40.339930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33550","server-name":"","error":"EOF"}
	
	
	==> etcd [a5a7eba2cfa84bb8e262a1c5817166f519b4e09375861ce4a544520381703cc1] <==
	{"level":"warn","ts":"2025-10-18T13:16:32.917418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:32.954014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.034422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.061702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.093053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.130308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:16:33.180930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35058","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T13:17:28.296217Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T13:17:28.296270Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-581407","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-18T13:17:28.296368Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T13:17:28.456599Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T13:17:28.458171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T13:17:28.458294Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-18T13:17:28.458422Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T13:17:28.458464Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T13:17:28.458731Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T13:17:28.458797Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T13:17:28.458830Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T13:17:28.458899Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T13:17:28.458933Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T13:17:28.458968Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T13:17:28.463008Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-18T13:17:28.463183Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T13:17:28.463247Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T13:17:28.463290Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-581407","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 13:17:57 up  5:00,  0 user,  load average: 2.80, 2.41, 1.98
	Linux pause-581407 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b1355c4f0d44581bb6ae756cce066454470ecb4bbe2947c437b4450819922e1] <==
	I1018 13:17:36.479869       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:17:36.480252       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:17:36.480381       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:17:36.480393       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:17:36.480406       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:17:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:17:36.716495       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:17:36.716837       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:17:36.716886       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:17:36.717048       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 13:17:41.819341       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:17:41.819477       1 metrics.go:72] Registering metrics
	I1018 13:17:41.819565       1 controller.go:711] "Syncing nftables rules"
	I1018 13:17:46.658470       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:17:46.658524       1 main.go:301] handling current node
	I1018 13:17:56.664296       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:17:56.664329       1 main.go:301] handling current node
	
	
	==> kindnet [dadb5dd59eca975bd8d89eca080be31edffaa1af272cf6f32406ac8cd85fc5c8] <==
	I1018 13:16:42.415427       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:16:42.416819       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:16:42.416997       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:16:42.417041       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:16:42.417089       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:16:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:16:42.633692       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:16:42.633776       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:16:42.633812       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:16:42.634652       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:17:12.634440       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:17:12.634446       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:17:12.634585       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:17:12.634586       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:17:13.934052       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:17:13.934158       1 metrics.go:72] Registering metrics
	I1018 13:17:13.934254       1 controller.go:711] "Syncing nftables rules"
	I1018 13:17:22.639828       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:17:22.639861       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7adb65405ca120df8f04c836231d865be3c0d67b70d53b94513214e3425de043] <==
	I1018 13:16:34.183721       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:16:34.183741       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 13:16:34.183836       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:16:34.203497       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 13:16:34.203606       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:16:34.236513       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:16:34.236643       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:16:34.893223       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 13:16:34.898728       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 13:16:34.898765       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:16:35.665585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:16:35.717400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:16:35.796951       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 13:16:35.806531       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 13:16:35.808169       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:16:35.813351       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:16:36.064057       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:16:36.851950       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:16:36.875962       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 13:16:36.890352       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 13:16:41.807320       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 13:16:41.928982       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:16:41.943095       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:16:42.108776       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:17:28.279569       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [db8db3b35d1a07329eea1f214d516d4aa4ab4d1b78ce5ff940efe0fd7d18d715] <==
	I1018 13:17:41.692923       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 13:17:41.697746       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 13:17:41.698489       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 13:17:41.698641       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 13:17:41.698721       1 aggregator.go:171] initial CRD sync complete...
	I1018 13:17:41.698772       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:17:41.698800       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:17:41.698827       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:17:41.699203       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 13:17:41.699258       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:17:41.715897       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 13:17:41.715996       1 policy_source.go:240] refreshing policies
	I1018 13:17:41.716237       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 13:17:41.733671       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:17:41.734440       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 13:17:41.767521       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:17:41.775730       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 13:17:41.776141       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:17:41.776706       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 13:17:42.380057       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:17:43.742008       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:17:45.136634       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:17:45.201623       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:17:45.413490       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:17:45.497407       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [32797f415a8c26a1b4aa88afa3f9137729690bd7d45311a68519beaccac43d20] <==
	I1018 13:17:45.127769       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 13:17:45.129973       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 13:17:45.132972       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 13:17:45.156879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:17:45.164880       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:17:45.171931       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:17:45.172079       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:17:45.172178       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:17:45.174901       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:17:45.175003       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:17:45.172200       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:17:45.172212       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 13:17:45.172226       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:17:45.172236       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:17:45.179643       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 13:17:45.180909       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 13:17:45.180987       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 13:17:45.181030       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 13:17:45.181069       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:17:45.179686       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 13:17:45.187909       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 13:17:45.188461       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 13:17:45.188533       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 13:17:45.196258       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 13:17:45.200846       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-controller-manager [4d2ab802325be7559c201bd12b2e174c54b89efbf9a5e54f0f6d4ff1d99f5680] <==
	I1018 13:16:41.092212       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:16:41.098019       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 13:16:41.101372       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 13:16:41.101433       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:16:41.101580       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 13:16:41.101608       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 13:16:41.102119       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:16:41.102267       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 13:16:41.103240       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 13:16:41.103526       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-581407" podCIDRs=["10.244.0.0/24"]
	I1018 13:16:41.103581       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 13:16:41.104131       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 13:16:41.104165       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:16:41.104273       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 13:16:41.104517       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:16:41.104988       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 13:16:41.105019       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:16:41.106206       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 13:16:41.119518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:16:41.123688       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 13:16:41.124762       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 13:16:41.130032       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:16:41.142180       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 13:16:41.148971       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:17:26.064007       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3c02f603c1a8ed9fdda1568f82331b865a0b965842aa281d525b5329f7f80441] <==
	I1018 13:17:38.390852       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:17:39.507893       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:17:41.770425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:17:41.770467       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:17:41.770528       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:17:41.814095       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:17:41.814216       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:17:41.822926       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:17:41.823358       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:17:41.823448       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:17:41.826122       1 config.go:200] "Starting service config controller"
	I1018 13:17:41.826208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:17:41.826255       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:17:41.826304       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:17:41.826347       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:17:41.826403       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:17:41.827489       1 config.go:309] "Starting node config controller"
	I1018 13:17:41.827565       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:17:41.827596       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:17:41.926955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:17:41.927007       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:17:41.927029       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361] <==
	I1018 13:16:42.363628       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:16:42.461752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:16:42.568335       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:16:42.568371       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:16:42.568436       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:16:42.747541       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:16:42.747590       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:16:42.824357       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:16:42.824709       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:16:42.824728       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:16:42.826236       1 config.go:200] "Starting service config controller"
	I1018 13:16:42.826246       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:16:42.826267       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:16:42.826271       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:16:42.826284       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:16:42.826288       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:16:42.826918       1 config.go:309] "Starting node config controller"
	I1018 13:16:42.826925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:16:42.826931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:16:42.927395       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:16:42.927429       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:16:42.927477       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [69a5d51f6f41fad47a432f28e5b9ebd476f9e34e0169affa15deeb3be20b5ef0] <==
	E1018 13:16:34.151004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:16:34.151055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 13:16:34.151166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 13:16:34.151225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:16:34.151265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:16:34.151641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 13:16:34.155971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 13:16:34.982506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 13:16:34.985938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 13:16:34.987093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 13:16:34.988782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:16:34.996007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 13:16:35.009481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 13:16:35.082958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 13:16:35.144464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 13:16:35.223125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 13:16:35.303674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:16:35.359904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1018 13:16:36.926980       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:17:28.281154       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 13:17:28.281247       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 13:17:28.281263       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 13:17:28.281276       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:17:28.281366       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 13:17:28.281382       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [99118ec2adf0430d39b6266a9bf69e3f2ff2203b9c7756876baa5111ff1a4b71] <==
	I1018 13:17:39.762186       1 serving.go:386] Generated self-signed cert in-memory
	W1018 13:17:41.600187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 13:17:41.600225       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 13:17:41.600236       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 13:17:41.600243       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 13:17:41.693486       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:17:41.693587       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:17:41.700189       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:17:41.700371       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:17:41.700428       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:17:41.700469       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 13:17:41.800985       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.279777    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6b3f1fb1812d4a14a76248251f4c7e63" pod="kube-system/kube-controller-manager-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.280036    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="463d758dafa1340c8fbb795faadc1d16" pod="kube-system/kube-apiserver-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: I1018 13:17:36.327127    1294 scope.go:117] "RemoveContainer" containerID="964b1c1291135dc51e3172aee8941d98ca865d7c9c6df299ebfbc006af73f361"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.327924    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-tzdm5\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9d2e1e6f-52ac-477c-b94f-c5b39e401dde" pod="kube-system/coredns-66bc5c9577-tzdm5"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.328289    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5240846d13aca7e9185d8b56b2c8d0c0" pod="kube-system/etcd-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.328616    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6b3f1fb1812d4a14a76248251f4c7e63" pod="kube-system/kube-controller-manager-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.328916    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="463d758dafa1340c8fbb795faadc1d16" pod="kube-system/kube-apiserver-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.329255    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-581407\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="8e6dfbfb3deed9f4c9553aa21d451053" pod="kube-system/kube-scheduler-pause-581407"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.329565    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-8jjd5\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="ce4339d5-6ec9-44ba-891a-207552e6e2d8" pod="kube-system/kindnet-8jjd5"
	Oct 18 13:17:36 pause-581407 kubelet[1294]: E1018 13:17:36.330100    1294 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l8qb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="d0d30679-b467-4886-9e01-214192aa7e54" pod="kube-system/kube-proxy-4l8qb"
	Oct 18 13:17:37 pause-581407 kubelet[1294]: W1018 13:17:37.199500    1294 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.540824    1294 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-581407\" is forbidden: User \"system:node:pause-581407\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" podUID="6b3f1fb1812d4a14a76248251f4c7e63" pod="kube-system/kube-controller-manager-pause-581407"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.541608    1294 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-581407\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.541746    1294 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-581407\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.541840    1294 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-581407\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.653628    1294 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-581407\" is forbidden: User \"system:node:pause-581407\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" podUID="463d758dafa1340c8fbb795faadc1d16" pod="kube-system/kube-apiserver-pause-581407"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.679309    1294 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-581407\" is forbidden: User \"system:node:pause-581407\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-581407' and this object" podUID="8e6dfbfb3deed9f4c9553aa21d451053" pod="kube-system/kube-scheduler-pause-581407"
	Oct 18 13:17:41 pause-581407 kubelet[1294]: E1018 13:17:41.696114    1294 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 18 13:17:41 pause-581407 kubelet[1294]:         pods "kindnet-8jjd5" is forbidden: User "system:node:pause-581407" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-581407' and this object
	Oct 18 13:17:41 pause-581407 kubelet[1294]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Oct 18 13:17:41 pause-581407 kubelet[1294]:  > podUID="ce4339d5-6ec9-44ba-891a-207552e6e2d8" pod="kube-system/kindnet-8jjd5"
	Oct 18 13:17:47 pause-581407 kubelet[1294]: W1018 13:17:47.214410    1294 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 13:17:50 pause-581407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:17:50 pause-581407 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:17:50 pause-581407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-581407 -n pause-581407
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-581407 -n pause-581407: exit status 2 (443.951164ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-581407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.70s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (278.999179ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:20:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
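The MK_ADDON_ENABLE_PAUSED error above shows that, before enabling an addon, minikube checks whether any containers are paused by running `sudo runc list -f json` inside the node, and on this CRI-O node the probe itself fails because the runc state directory /run/runc does not exist. A minimal sketch of reproducing that probe by hand (profile name taken from the command above; the crictl fallback is an assumption about the kicbase node image, not something the test does):

	# Re-run the exact probe that minikube reports in the error above
	out/minikube-linux-arm64 -p old-k8s-version-460322 ssh -- sudo runc list -f json

	# Assumption: crictl ships in the node image; list the CRI-O managed containers directly instead
	out/minikube-linux-arm64 -p old-k8s-version-460322 ssh -- sudo crictl ps --state running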
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-460322 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-460322 describe deploy/metrics-server -n kube-system: exit status 1 (88.833072ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-460322 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
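The assertion above expects the metrics-server Deployment spec to carry the image built from the --images/--registries overrides (fake.domain/registry.k8s.io/echoserver:1.4); because the addon never enabled, the Deployment is absent and the describe output is empty. A hedged way to check the image by hand once the Deployment exists (the jsonpath query is illustrative, not the test's own code):

	kubectl --context old-k8s-version-460322 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to print: fake.domain/registry.k8s.io/echoserver:1.4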
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-460322
helpers_test.go:243: (dbg) docker inspect old-k8s-version-460322:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884",
	        "Created": "2025-10-18T13:19:47.412981498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1014870,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:19:47.476844089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/hostname",
	        "HostsPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/hosts",
	        "LogPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884-json.log",
	        "Name": "/old-k8s-version-460322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-460322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-460322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884",
	                "LowerDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-460322",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-460322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-460322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-460322",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-460322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b6783446187d44556a85c2fa4e3ce96955a824fbda8625313167817500b7e06",
	            "SandboxKey": "/var/run/docker/netns/7b6783446187",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34157"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34158"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34161"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34159"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34160"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-460322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:f2:b0:8d:c3:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b3865b19e7ef0c5515b69409531de50dd7d3b36c97ad0e3b63e293f7d29b30d",
	                    "EndpointID": "6dec7a513869bf7f4344110b27cd8c86721a22181223611be2ae98f63ee58887",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-460322",
	                        "a47757ca4663"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
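The inspect output above records the ports the node container publishes on the host loopback interface (for example 8443/tcp mapped to 127.0.0.1:34160 for the API server). A small sketch, assuming the Docker CLI is available on the host, for extracting just that mapping instead of reading the full JSON:

	docker inspect old-k8s-version-460322 \
	  -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
	# prints the host port (34160 above) that forwards to the node's API server on 8443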
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460322 -n old-k8s-version-460322
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-460322 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-460322 logs -n 25: (1.247016096s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-633218 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo containerd config dump                                                                                                                                                                                                  │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo crio config                                                                                                                                                                                                             │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ delete  │ -p cilium-633218                                                                                                                                                                                                                              │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ start   │ -p force-systemd-env-914730 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-914730  │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ force-systemd-flag-882807 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-882807 │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ delete  │ -p force-systemd-flag-882807                                                                                                                                                                                                                  │ force-systemd-flag-882807 │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-076887    │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p force-systemd-env-914730                                                                                                                                                                                                                   │ force-systemd-env-914730  │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p cert-options-179041 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ cert-options-179041 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ -p cert-options-179041 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p cert-options-179041                                                                                                                                                                                                                        │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:19:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:19:40.863534 1014478 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:19:40.863773 1014478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:19:40.863786 1014478 out.go:374] Setting ErrFile to fd 2...
	I1018 13:19:40.863792 1014478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:19:40.864082 1014478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:19:40.864541 1014478 out.go:368] Setting JSON to false
	I1018 13:19:40.865530 1014478 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18133,"bootTime":1760775448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:19:40.865599 1014478 start.go:141] virtualization:  
	I1018 13:19:40.869079 1014478 out.go:179] * [old-k8s-version-460322] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:19:40.873132 1014478 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:19:40.873245 1014478 notify.go:220] Checking for updates...
	I1018 13:19:40.880125 1014478 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:19:40.883285 1014478 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:19:40.886259 1014478 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:19:40.889314 1014478 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:19:40.892229 1014478 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:19:40.895734 1014478 config.go:182] Loaded profile config "cert-expiration-076887": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:19:40.895848 1014478 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:19:40.921968 1014478 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:19:40.922113 1014478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:19:40.982155 1014478 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:19:40.972623535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:19:40.982268 1014478 docker.go:318] overlay module found
	I1018 13:19:40.985467 1014478 out.go:179] * Using the docker driver based on user configuration
	I1018 13:19:40.988432 1014478 start.go:305] selected driver: docker
	I1018 13:19:40.988457 1014478 start.go:925] validating driver "docker" against <nil>
	I1018 13:19:40.988472 1014478 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:19:40.989297 1014478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:19:41.057076 1014478 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:19:41.047538896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:19:41.057235 1014478 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 13:19:41.057489 1014478 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:19:41.060436 1014478 out.go:179] * Using Docker driver with root privileges
	I1018 13:19:41.063337 1014478 cni.go:84] Creating CNI manager for ""
	I1018 13:19:41.063416 1014478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:19:41.063435 1014478 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 13:19:41.063514 1014478 start.go:349] cluster config:
	{Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:19:41.068508 1014478 out.go:179] * Starting "old-k8s-version-460322" primary control-plane node in "old-k8s-version-460322" cluster
	I1018 13:19:41.071340 1014478 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:19:41.074343 1014478 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:19:41.077217 1014478 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 13:19:41.077278 1014478 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 13:19:41.077290 1014478 cache.go:58] Caching tarball of preloaded images
	I1018 13:19:41.077297 1014478 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:19:41.077382 1014478 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:19:41.077392 1014478 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 13:19:41.077503 1014478 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/config.json ...
	I1018 13:19:41.077529 1014478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/config.json: {Name:mk4d40b83e8ee2b7e3e18417229a83966992b652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:19:41.097693 1014478 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:19:41.097721 1014478 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:19:41.097740 1014478 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:19:41.097774 1014478 start.go:360] acquireMachinesLock for old-k8s-version-460322: {Name:mk920abd4332d87bf804859db37de89666f5b2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:19:41.097889 1014478 start.go:364] duration metric: took 94.598µs to acquireMachinesLock for "old-k8s-version-460322"
	I1018 13:19:41.097917 1014478 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:19:41.097994 1014478 start.go:125] createHost starting for "" (driver="docker")
	I1018 13:19:41.101294 1014478 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 13:19:41.101508 1014478 start.go:159] libmachine.API.Create for "old-k8s-version-460322" (driver="docker")
	I1018 13:19:41.101552 1014478 client.go:168] LocalClient.Create starting
	I1018 13:19:41.101622 1014478 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 13:19:41.101665 1014478 main.go:141] libmachine: Decoding PEM data...
	I1018 13:19:41.101686 1014478 main.go:141] libmachine: Parsing certificate...
	I1018 13:19:41.101744 1014478 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 13:19:41.101768 1014478 main.go:141] libmachine: Decoding PEM data...
	I1018 13:19:41.101781 1014478 main.go:141] libmachine: Parsing certificate...
	I1018 13:19:41.102137 1014478 cli_runner.go:164] Run: docker network inspect old-k8s-version-460322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 13:19:41.118377 1014478 cli_runner.go:211] docker network inspect old-k8s-version-460322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 13:19:41.118460 1014478 network_create.go:284] running [docker network inspect old-k8s-version-460322] to gather additional debugging logs...
	I1018 13:19:41.118480 1014478 cli_runner.go:164] Run: docker network inspect old-k8s-version-460322
	W1018 13:19:41.135374 1014478 cli_runner.go:211] docker network inspect old-k8s-version-460322 returned with exit code 1
	I1018 13:19:41.135405 1014478 network_create.go:287] error running [docker network inspect old-k8s-version-460322]: docker network inspect old-k8s-version-460322: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-460322 not found
	I1018 13:19:41.135418 1014478 network_create.go:289] output of [docker network inspect old-k8s-version-460322]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-460322 not found
	
	** /stderr **
	I1018 13:19:41.135532 1014478 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:19:41.152012 1014478 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
	I1018 13:19:41.152413 1014478 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b162987809b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:5f:25:ac:cd:2a} reservation:<nil>}
	I1018 13:19:41.152652 1014478 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c986d614dab5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:69:4f:12:e6:e4} reservation:<nil>}
	I1018 13:19:41.152927 1014478 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-30b55a9e8dbd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:38:91:ed:1b:fa} reservation:<nil>}
	I1018 13:19:41.153385 1014478 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a32540}
	I1018 13:19:41.153406 1014478 network_create.go:124] attempt to create docker network old-k8s-version-460322 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 13:19:41.153469 1014478 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-460322 old-k8s-version-460322
	I1018 13:19:41.215056 1014478 network_create.go:108] docker network old-k8s-version-460322 192.168.85.0/24 created
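Hedged aside (not part of the log): the bridge network just created can be checked against the subnet minikube picked with a plain docker network inspect, reading the same IPAM fields the log's own inspect template uses.

    # Confirm the subnet/gateway reported above for this profile's network.
    docker network inspect old-k8s-version-460322 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # Expected per the log: 192.168.85.0/24 192.168.85.1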
	I1018 13:19:41.215089 1014478 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-460322" container
	I1018 13:19:41.215179 1014478 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 13:19:41.232648 1014478 cli_runner.go:164] Run: docker volume create old-k8s-version-460322 --label name.minikube.sigs.k8s.io=old-k8s-version-460322 --label created_by.minikube.sigs.k8s.io=true
	I1018 13:19:41.250913 1014478 oci.go:103] Successfully created a docker volume old-k8s-version-460322
	I1018 13:19:41.251010 1014478 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-460322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-460322 --entrypoint /usr/bin/test -v old-k8s-version-460322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 13:19:41.793133 1014478 oci.go:107] Successfully prepared a docker volume old-k8s-version-460322
	I1018 13:19:41.793188 1014478 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 13:19:41.793207 1014478 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 13:19:41.793278 1014478 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-460322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 13:19:47.331175 1014478 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-460322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.537855354s)
	I1018 13:19:47.331207 1014478 kic.go:203] duration metric: took 5.537996229s to extract preloaded images to volume ...
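Hedged sketch of how to peek at what the extraction step above left on the named volume: it reuses the kicbase image with an explicit --entrypoint, the same trick the log uses for /usr/bin/test and /usr/bin/tar. The /var/lib/containers path is an assumption about where the cri-o preload lands once the volume is mounted at /var.

    # List the extracted preload contents on the old-k8s-version-460322 volume.
    docker run --rm --entrypoint /bin/ls \
      -v old-k8s-version-460322:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
      /var/lib/containers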
	W1018 13:19:47.331385 1014478 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 13:19:47.331499 1014478 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 13:19:47.395405 1014478 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-460322 --name old-k8s-version-460322 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-460322 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-460322 --network old-k8s-version-460322 --ip 192.168.85.2 --volume old-k8s-version-460322:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 13:19:47.720724 1014478 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Running}}
	I1018 13:19:47.742567 1014478 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
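Hedged note: the --publish=127.0.0.1:: flags in the docker run above let Docker pick ephemeral host ports; the log later reads the SSH one back with the inspect template below (it resolves to 127.0.0.1:34157).

    # Look up the host port mapped to the node container's SSH port.
    docker container inspect old-k8s-version-460322 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    docker port old-k8s-version-460322 22/tcp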
	I1018 13:19:47.769586 1014478 cli_runner.go:164] Run: docker exec old-k8s-version-460322 stat /var/lib/dpkg/alternatives/iptables
	I1018 13:19:47.822821 1014478 oci.go:144] the created container "old-k8s-version-460322" has a running status.
	I1018 13:19:47.822848 1014478 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa...
	I1018 13:19:47.917215 1014478 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 13:19:47.941708 1014478 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:19:47.962498 1014478 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 13:19:47.962519 1014478 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-460322 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 13:19:48.020168 1014478 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:19:48.045463 1014478 machine.go:93] provisionDockerMachine start ...
	I1018 13:19:48.045569 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:48.089379 1014478 main.go:141] libmachine: Using SSH client type: native
	I1018 13:19:48.089728 1014478 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1018 13:19:48.089745 1014478 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:19:48.095850 1014478 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 13:19:51.247722 1014478 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-460322
	
	I1018 13:19:51.247749 1014478 ubuntu.go:182] provisioning hostname "old-k8s-version-460322"
	I1018 13:19:51.247813 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:51.265182 1014478 main.go:141] libmachine: Using SSH client type: native
	I1018 13:19:51.265485 1014478 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1018 13:19:51.265497 1014478 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-460322 && echo "old-k8s-version-460322" | sudo tee /etc/hostname
	I1018 13:19:51.421596 1014478 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-460322
	
	I1018 13:19:51.421678 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:51.440700 1014478 main.go:141] libmachine: Using SSH client type: native
	I1018 13:19:51.441127 1014478 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1018 13:19:51.441148 1014478 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-460322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-460322/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-460322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:19:51.596499 1014478 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:19:51.596525 1014478 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:19:51.596546 1014478 ubuntu.go:190] setting up certificates
	I1018 13:19:51.596556 1014478 provision.go:84] configureAuth start
	I1018 13:19:51.596617 1014478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:19:51.615586 1014478 provision.go:143] copyHostCerts
	I1018 13:19:51.615860 1014478 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:19:51.615891 1014478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:19:51.615982 1014478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:19:51.616093 1014478 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:19:51.616106 1014478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:19:51.616134 1014478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:19:51.616190 1014478 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:19:51.616198 1014478 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:19:51.616221 1014478 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:19:51.616271 1014478 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-460322 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-460322]
	I1018 13:19:52.191218 1014478 provision.go:177] copyRemoteCerts
	I1018 13:19:52.191292 1014478 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:19:52.191335 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:52.209166 1014478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:19:52.311390 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 13:19:52.329682 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:19:52.347995 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:19:52.366135 1014478 provision.go:87] duration metric: took 769.555942ms to configureAuth
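Hedged sketch (run on the host, not shown in the log): inspect the server certificate configureAuth just generated to confirm the SAN list reported above (127.0.0.1, 192.168.85.2, localhost, minikube, old-k8s-version-460322).

    # Print the SAN extension of the freshly generated server cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'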
	I1018 13:19:52.366164 1014478 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:19:52.366349 1014478 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:19:52.366466 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:52.383830 1014478 main.go:141] libmachine: Using SSH client type: native
	I1018 13:19:52.384170 1014478 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34157 <nil> <nil>}
	I1018 13:19:52.384199 1014478 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:19:52.654653 1014478 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:19:52.654680 1014478 machine.go:96] duration metric: took 4.609194756s to provisionDockerMachine
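Hedged aside: the tee a few lines up writes a one-line environment file inside the node; reading it back is the quickest check that the restart used the intended flag (how crio's systemd unit consumes the variable is not shown in this log).

    # Inside the node container:
    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '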
	I1018 13:19:52.654690 1014478 client.go:171] duration metric: took 11.553129425s to LocalClient.Create
	I1018 13:19:52.654710 1014478 start.go:167] duration metric: took 11.553202976s to libmachine.API.Create "old-k8s-version-460322"
	I1018 13:19:52.654718 1014478 start.go:293] postStartSetup for "old-k8s-version-460322" (driver="docker")
	I1018 13:19:52.654732 1014478 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:19:52.654826 1014478 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:19:52.654871 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:52.672379 1014478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:19:52.779713 1014478 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:19:52.782937 1014478 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:19:52.782964 1014478 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:19:52.782975 1014478 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:19:52.783030 1014478 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:19:52.783114 1014478 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:19:52.783225 1014478 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:19:52.790687 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:19:52.807779 1014478 start.go:296] duration metric: took 153.04572ms for postStartSetup
	I1018 13:19:52.808164 1014478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:19:52.826176 1014478 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/config.json ...
	I1018 13:19:52.826560 1014478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:19:52.826673 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:52.844996 1014478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:19:52.948866 1014478 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:19:52.953851 1014478 start.go:128] duration metric: took 11.855841601s to createHost
	I1018 13:19:52.953874 1014478 start.go:83] releasing machines lock for "old-k8s-version-460322", held for 11.855974033s
	I1018 13:19:52.953952 1014478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:19:52.970470 1014478 ssh_runner.go:195] Run: cat /version.json
	I1018 13:19:52.970533 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:52.970588 1014478 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:19:52.970671 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:19:52.990633 1014478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:19:52.993236 1014478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:19:53.095532 1014478 ssh_runner.go:195] Run: systemctl --version
	I1018 13:19:53.196975 1014478 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:19:53.233286 1014478 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:19:53.237596 1014478 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:19:53.237689 1014478 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:19:53.266844 1014478 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
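Hedged aside: the find/mv above renames any bridge/podman CNI configs with a .mk_disabled suffix so that kindnet (recommended earlier for the docker driver + crio runtime) is the only CNI left in play. The resulting file names below are inferred from the two paths the log reports as disabled.

    # Inside the node container:
    ls /etc/cni/net.d/
    # e.g. 87-podman-bridge.conflist.mk_disabled
    #      10-crio-bridge.conflist.disabled.mk_disabled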
	I1018 13:19:53.266915 1014478 start.go:495] detecting cgroup driver to use...
	I1018 13:19:53.266963 1014478 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:19:53.267040 1014478 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:19:53.285065 1014478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:19:53.298140 1014478 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:19:53.298248 1014478 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:19:53.315994 1014478 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:19:53.338035 1014478 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:19:53.466273 1014478 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:19:53.602122 1014478 docker.go:234] disabling docker service ...
	I1018 13:19:53.602245 1014478 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:19:53.624665 1014478 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:19:53.639434 1014478 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:19:53.770722 1014478 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:19:53.898266 1014478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:19:53.911258 1014478 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:19:53.926610 1014478 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 13:19:53.926696 1014478 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:19:53.935888 1014478 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:19:53.936005 1014478 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:19:53.946076 1014478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:19:53.956647 1014478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:19:53.968467 1014478 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:19:53.977864 1014478 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:19:53.987802 1014478 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:19:54.005362 1014478 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:19:54.017984 1014478 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:19:54.026439 1014478 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:19:54.034569 1014478 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:19:54.159523 1014478 ssh_runner.go:195] Run: sudo systemctl restart crio
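Hedged summary check of the four in-place edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), runnable inside the node after the restart; `crio config` is the same command the log itself calls a bit further down to read the effective settings.

    # Confirm the drop-in now carries the values minikube just wrote.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Cross-check what cri-o actually loads (assumed to include conf.d drop-ins).
    sudo crio config | grep -E 'pause_image|cgroup_manager'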
	I1018 13:19:54.297016 1014478 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:19:54.297094 1014478 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:19:54.301225 1014478 start.go:563] Will wait 60s for crictl version
	I1018 13:19:54.301295 1014478 ssh_runner.go:195] Run: which crictl
	I1018 13:19:54.305332 1014478 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:19:54.331822 1014478 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:19:54.331909 1014478 ssh_runner.go:195] Run: crio --version
	I1018 13:19:54.363428 1014478 ssh_runner.go:195] Run: crio --version
	I1018 13:19:54.397904 1014478 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1018 13:19:54.400800 1014478 cli_runner.go:164] Run: docker network inspect old-k8s-version-460322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:19:54.417874 1014478 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:19:54.421698 1014478 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
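Hedged aside: the bash one-liner above (and a matching one later for control-plane.minikube.internal) rewrites /etc/hosts by building a temp file and copying it back with sudo cp; the end state inside the node can be checked with a grep.

    grep 'minikube.internal' /etc/hosts
    # 192.168.85.1   host.minikube.internal
    # 192.168.85.2   control-plane.minikube.internal   (added further down in this log)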
	I1018 13:19:54.432821 1014478 kubeadm.go:883] updating cluster {Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:19:54.432937 1014478 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 13:19:54.432993 1014478 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:19:54.474071 1014478 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:19:54.474092 1014478 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:19:54.474150 1014478 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:19:54.500664 1014478 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:19:54.500818 1014478 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:19:54.500841 1014478 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1018 13:19:54.500962 1014478 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-460322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:19:54.501086 1014478 ssh_runner.go:195] Run: crio config
	I1018 13:19:54.575516 1014478 cni.go:84] Creating CNI manager for ""
	I1018 13:19:54.575602 1014478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:19:54.575646 1014478 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:19:54.575742 1014478 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-460322 NodeName:old-k8s-version-460322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:19:54.575910 1014478 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-460322"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:19:54.576011 1014478 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 13:19:54.584282 1014478 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:19:54.584384 1014478 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:19:54.592510 1014478 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 13:19:54.606324 1014478 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:19:54.623321 1014478 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
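Hedged spot-check of the 2160-byte manifest just copied to /var/tmp/minikube/kubeadm.yaml.new, pulling out the fields that have to agree with the cluster config printed earlier in this log.

    grep -E 'advertiseAddress|controlPlaneEndpoint|podSubnet|serviceSubnet|criSocket' \
      /var/tmp/minikube/kubeadm.yaml.new
    # advertiseAddress: 192.168.85.2
    # controlPlaneEndpoint: control-plane.minikube.internal:8443
    # podSubnet: "10.244.0.0/16", serviceSubnet: 10.96.0.0/12
    # criSocket: unix:///var/run/crio/crio.sock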
	I1018 13:19:54.639101 1014478 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:19:54.642802 1014478 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:19:54.653171 1014478 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:19:54.777400 1014478 ssh_runner.go:195] Run: sudo systemctl start kubelet
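Hedged follow-up to the daemon-reload/start pair above: verify the kubelet drop-in landed and the unit is active (before kubeadm init it may restart repeatedly, which would be expected at this stage).

    # Inside the node container:
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl is-active kubelet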
	I1018 13:19:54.795169 1014478 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322 for IP: 192.168.85.2
	I1018 13:19:54.795243 1014478 certs.go:195] generating shared ca certs ...
	I1018 13:19:54.795275 1014478 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:19:54.795459 1014478 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:19:54.795542 1014478 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:19:54.795577 1014478 certs.go:257] generating profile certs ...
	I1018 13:19:54.795687 1014478 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.key
	I1018 13:19:54.795726 1014478 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt with IP's: []
	I1018 13:19:55.009152 1014478 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt ...
	I1018 13:19:55.009190 1014478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: {Name:mkc2a7d29eec801caabe443d039253265f669988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:19:55.009423 1014478 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.key ...
	I1018 13:19:55.009434 1014478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.key: {Name:mkc5ef0c93bd1feb1a6e61b4d1b17344cdacdb2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:19:55.009525 1014478 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key.449e5b3e
	I1018 13:19:55.009543 1014478 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.crt.449e5b3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 13:19:55.971592 1014478 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.crt.449e5b3e ...
	I1018 13:19:55.971626 1014478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.crt.449e5b3e: {Name:mk9e5bda4659e2519f30450152175960efbdd8fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:19:55.971825 1014478 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key.449e5b3e ...
	I1018 13:19:55.971845 1014478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key.449e5b3e: {Name:mk2876a58b9da121729442a7cbba32295cf90add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:19:55.971924 1014478 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.crt.449e5b3e -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.crt
	I1018 13:19:55.972017 1014478 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key.449e5b3e -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key
	I1018 13:19:55.972090 1014478 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.key
	I1018 13:19:55.972123 1014478 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.crt with IP's: []
	I1018 13:19:56.208259 1014478 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.crt ...
	I1018 13:19:56.208291 1014478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.crt: {Name:mk4e1cb1eb8eb9df56282b728b14586b5762ee46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:19:56.208479 1014478 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.key ...
	I1018 13:19:56.208493 1014478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.key: {Name:mk006e300d673a53b98e4a0a29232fdb6f03b0ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:19:56.208674 1014478 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:19:56.208715 1014478 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:19:56.208728 1014478 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:19:56.208751 1014478 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:19:56.208781 1014478 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:19:56.208811 1014478 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:19:56.208857 1014478 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:19:56.209434 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:19:56.229218 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:19:56.248208 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:19:56.266950 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:19:56.286343 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 13:19:56.304519 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:19:56.322817 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:19:56.342537 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:19:56.373346 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:19:56.393149 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:19:56.412328 1014478 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:19:56.430915 1014478 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:19:56.454797 1014478 ssh_runner.go:195] Run: openssl version
	I1018 13:19:56.462129 1014478 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:19:56.472351 1014478 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:19:56.477404 1014478 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:19:56.477559 1014478 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:19:56.524548 1014478 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:19:56.533001 1014478 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:19:56.544497 1014478 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:19:56.549919 1014478 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:19:56.550030 1014478 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:19:56.592214 1014478 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 13:19:56.600842 1014478 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:19:56.609397 1014478 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:19:56.613261 1014478 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:19:56.613335 1014478 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:19:56.655631 1014478 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
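The three symlink commands above follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is also reachable as <subject-hash>.0, where the hash is exactly what the preceding `openssl x509 -hash -noout` invocations print. A minimal Go sketch of deriving that link name (an illustration only, not minikube's implementation; the certificate path is the one from the log):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// subjectHashLink returns the /etc/ssl/certs/<hash>.0 name for a PEM
// certificate, using the same `openssl x509 -hash -noout` call seen above.
func subjectHashLink(pemPath string) (string, error) {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    if err != nil {
        return "", err
    }
    return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
    link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
    if err != nil {
        panic(err)
    }
    fmt.Println(link) // e.g. /etc/ssl/certs/b5213941.0
}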
	I1018 13:19:56.664740 1014478 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:19:56.668526 1014478 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 13:19:56.668616 1014478 kubeadm.go:400] StartCluster: {Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:19:56.668708 1014478 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:19:56.668771 1014478 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:19:56.697272 1014478 cri.go:89] found id: ""
	I1018 13:19:56.697376 1014478 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:19:56.705839 1014478 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 13:19:56.714227 1014478 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 13:19:56.714354 1014478 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 13:19:56.722747 1014478 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 13:19:56.722769 1014478 kubeadm.go:157] found existing configuration files:
	
	I1018 13:19:56.722861 1014478 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 13:19:56.731443 1014478 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 13:19:56.731566 1014478 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 13:19:56.739761 1014478 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 13:19:56.747898 1014478 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 13:19:56.747987 1014478 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 13:19:56.756017 1014478 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 13:19:56.763866 1014478 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 13:19:56.763987 1014478 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 13:19:56.771592 1014478 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 13:19:56.779340 1014478 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 13:19:56.779408 1014478 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 13:19:56.787855 1014478 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 13:19:56.834394 1014478 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1018 13:19:56.834686 1014478 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 13:19:56.871443 1014478 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 13:19:56.871565 1014478 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 13:19:56.871671 1014478 kubeadm.go:318] OS: Linux
	I1018 13:19:56.871756 1014478 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 13:19:56.871840 1014478 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 13:19:56.871913 1014478 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 13:19:56.871996 1014478 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 13:19:56.872092 1014478 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 13:19:56.872174 1014478 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 13:19:56.872248 1014478 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 13:19:56.872329 1014478 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 13:19:56.872406 1014478 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 13:19:56.954955 1014478 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 13:19:56.955119 1014478 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 13:19:56.955248 1014478 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 13:19:57.107761 1014478 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 13:19:57.113634 1014478 out.go:252]   - Generating certificates and keys ...
	I1018 13:19:57.113796 1014478 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 13:19:57.113885 1014478 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 13:19:57.624563 1014478 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 13:19:58.233557 1014478 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 13:19:59.211238 1014478 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 13:19:59.805038 1014478 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 13:20:00.711260 1014478 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 13:20:00.711791 1014478 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-460322] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 13:20:01.231598 1014478 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 13:20:01.232012 1014478 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-460322] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 13:20:01.549944 1014478 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 13:20:01.911092 1014478 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 13:20:02.057938 1014478 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 13:20:02.058255 1014478 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 13:20:02.668312 1014478 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 13:20:03.532021 1014478 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 13:20:03.697377 1014478 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 13:20:04.810814 1014478 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 13:20:04.811644 1014478 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 13:20:04.814547 1014478 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 13:20:04.818439 1014478 out.go:252]   - Booting up control plane ...
	I1018 13:20:04.818553 1014478 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 13:20:04.818646 1014478 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 13:20:04.818717 1014478 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 13:20:04.836129 1014478 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 13:20:04.836265 1014478 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 13:20:04.836313 1014478 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 13:20:04.970338 1014478 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 13:20:12.974708 1014478 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.006079 seconds
	I1018 13:20:12.974843 1014478 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 13:20:12.990614 1014478 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 13:20:13.528273 1014478 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 13:20:13.528490 1014478 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-460322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 13:20:14.042977 1014478 kubeadm.go:318] [bootstrap-token] Using token: tm8nxe.k4ido4km5afvs3qq
	I1018 13:20:14.045998 1014478 out.go:252]   - Configuring RBAC rules ...
	I1018 13:20:14.046143 1014478 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 13:20:14.053452 1014478 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 13:20:14.062373 1014478 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 13:20:14.067043 1014478 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 13:20:14.073499 1014478 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 13:20:14.079817 1014478 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 13:20:14.100050 1014478 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 13:20:14.443176 1014478 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 13:20:14.481762 1014478 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 13:20:14.482796 1014478 kubeadm.go:318] 
	I1018 13:20:14.482871 1014478 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 13:20:14.482877 1014478 kubeadm.go:318] 
	I1018 13:20:14.482968 1014478 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 13:20:14.482974 1014478 kubeadm.go:318] 
	I1018 13:20:14.483000 1014478 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 13:20:14.483062 1014478 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 13:20:14.483121 1014478 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 13:20:14.483127 1014478 kubeadm.go:318] 
	I1018 13:20:14.483183 1014478 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 13:20:14.483188 1014478 kubeadm.go:318] 
	I1018 13:20:14.483240 1014478 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 13:20:14.483244 1014478 kubeadm.go:318] 
	I1018 13:20:14.483304 1014478 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 13:20:14.483384 1014478 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 13:20:14.483456 1014478 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 13:20:14.483460 1014478 kubeadm.go:318] 
	I1018 13:20:14.483548 1014478 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 13:20:14.483646 1014478 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 13:20:14.483701 1014478 kubeadm.go:318] 
	I1018 13:20:14.483792 1014478 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tm8nxe.k4ido4km5afvs3qq \
	I1018 13:20:14.483900 1014478 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e \
	I1018 13:20:14.483922 1014478 kubeadm.go:318] 	--control-plane 
	I1018 13:20:14.483927 1014478 kubeadm.go:318] 
	I1018 13:20:14.484016 1014478 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 13:20:14.484020 1014478 kubeadm.go:318] 
	I1018 13:20:14.484118 1014478 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tm8nxe.k4ido4km5afvs3qq \
	I1018 13:20:14.484226 1014478 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e 
	I1018 13:20:14.492541 1014478 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 13:20:14.492671 1014478 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
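The --discovery-token-ca-cert-hash value printed in the join commands above is kubeadm's public-key pin: "sha256:" followed by the hex SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that reproduces it (illustrative only; the ca.crt path is the one copied to the node earlier in this log):

package main

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/hex"
    "encoding/pem"
    "fmt"
    "os"
)

func main() {
    // CA certificate path as used earlier in this log.
    data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // kubeadm pins the public key: SHA-256 over the DER-encoded SPKI.
    sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}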
	I1018 13:20:14.492692 1014478 cni.go:84] Creating CNI manager for ""
	I1018 13:20:14.492699 1014478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:20:14.495836 1014478 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 13:20:14.498651 1014478 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 13:20:14.507911 1014478 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1018 13:20:14.507934 1014478 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 13:20:14.538410 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 13:20:15.534148 1014478 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 13:20:15.534280 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:15.534349 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-460322 minikube.k8s.io/updated_at=2025_10_18T13_20_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=old-k8s-version-460322 minikube.k8s.io/primary=true
	I1018 13:20:15.685808 1014478 ops.go:34] apiserver oom_adj: -16
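The -16 reported by ops.go above comes straight from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command a few lines earlier. A small Go equivalent of that check (a sketch, not minikube's code):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    // Find the kube-apiserver PID the same way the shell pipeline in the log does.
    out, err := exec.Command("pgrep", "kube-apiserver").Output()
    if err != nil {
        panic(err)
    }
    pid := strings.TrimSpace(string(out))
    data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    if err != nil {
        panic(err)
    }
    fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // -16 on this node
}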
	I1018 13:20:15.685921 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:16.186464 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:16.686893 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:17.186549 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:17.686924 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:18.186080 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:18.686428 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:19.186035 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:19.686070 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:20.186194 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:20.686691 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:21.186669 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:21.686062 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:22.186270 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:22.686642 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:23.187010 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:23.686494 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:24.186107 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:24.686093 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:25.186071 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:25.686784 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:26.186821 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:26.686225 1014478 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:20:26.830912 1014478 kubeadm.go:1113] duration metric: took 11.29667737s to wait for elevateKubeSystemPrivileges
	I1018 13:20:26.830945 1014478 kubeadm.go:402] duration metric: took 30.162333483s to StartCluster
	I1018 13:20:26.830964 1014478 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:20:26.831028 1014478 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:20:26.832112 1014478 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:20:26.832358 1014478 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:20:26.832498 1014478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 13:20:26.832764 1014478 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:20:26.832809 1014478 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:20:26.832872 1014478 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-460322"
	I1018 13:20:26.832893 1014478 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-460322"
	I1018 13:20:26.832921 1014478 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:20:26.833603 1014478 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:20:26.833956 1014478 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-460322"
	I1018 13:20:26.833986 1014478 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-460322"
	I1018 13:20:26.834259 1014478 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:20:26.835935 1014478 out.go:179] * Verifying Kubernetes components...
	I1018 13:20:26.839494 1014478 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:20:26.876076 1014478 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-460322"
	I1018 13:20:26.876128 1014478 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:20:26.876589 1014478 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:20:26.891763 1014478 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:20:26.894647 1014478 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:20:26.894668 1014478 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:20:26.894739 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:20:26.908435 1014478 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:20:26.908474 1014478 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:20:26.908560 1014478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:20:26.931801 1014478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:20:26.951931 1014478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34157 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:20:27.311382 1014478 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:20:27.328390 1014478 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 13:20:27.328556 1014478 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:20:27.386628 1014478 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:20:28.255369 1014478 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 13:20:28.257736 1014478 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-460322" to be "Ready" ...
	I1018 13:20:28.657468 1014478 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.27076948s)
	I1018 13:20:28.661131 1014478 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 13:20:28.664211 1014478 addons.go:514] duration metric: took 1.831363211s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1018 13:20:28.762024 1014478 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-460322" context rescaled to 1 replicas
	W1018 13:20:30.261882 1014478 node_ready.go:57] node "old-k8s-version-460322" has "Ready":"False" status (will retry)
	W1018 13:20:32.262307 1014478 node_ready.go:57] node "old-k8s-version-460322" has "Ready":"False" status (will retry)
	W1018 13:20:34.761327 1014478 node_ready.go:57] node "old-k8s-version-460322" has "Ready":"False" status (will retry)
	W1018 13:20:37.261172 1014478 node_ready.go:57] node "old-k8s-version-460322" has "Ready":"False" status (will retry)
	W1018 13:20:39.261442 1014478 node_ready.go:57] node "old-k8s-version-460322" has "Ready":"False" status (will retry)
	W1018 13:20:41.261733 1014478 node_ready.go:57] node "old-k8s-version-460322" has "Ready":"False" status (will retry)
	I1018 13:20:41.764726 1014478 node_ready.go:49] node "old-k8s-version-460322" is "Ready"
	I1018 13:20:41.764752 1014478 node_ready.go:38] duration metric: took 13.506980135s for node "old-k8s-version-460322" to be "Ready" ...
	I1018 13:20:41.764765 1014478 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:20:41.764828 1014478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:20:41.781200 1014478 api_server.go:72] duration metric: took 14.948803725s to wait for apiserver process to appear ...
	I1018 13:20:41.781223 1014478 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:20:41.781242 1014478 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:20:41.794254 1014478 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 13:20:41.796604 1014478 api_server.go:141] control plane version: v1.28.0
	I1018 13:20:41.796678 1014478 api_server.go:131] duration metric: took 15.447166ms to wait for apiserver health ...
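The healthz probe above is a plain HTTPS GET against the apiserver that must return 200 with body "ok". A minimal Go sketch of the same request (illustrative only; it skips TLS verification because, unlike minikube, it does not load the cluster CA):

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Certificate verification is skipped in this sketch; the real check
    // would trust the cluster CA generated earlier in this log.
    client := &http.Client{Transport: &http.Transport{
        TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    }}
    resp, err := client.Get("https://192.168.85.2:8443/healthz")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}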
	I1018 13:20:41.796704 1014478 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:20:41.801981 1014478 system_pods.go:59] 8 kube-system pods found
	I1018 13:20:41.802075 1014478 system_pods.go:61] "coredns-5dd5756b68-lqv5k" [2ca5efdc-f3fd-488a-90ee-6a4229383c66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:20:41.802100 1014478 system_pods.go:61] "etcd-old-k8s-version-460322" [d46b2f32-3a4d-44d4-a126-fb038614bd8f] Running
	I1018 13:20:41.802142 1014478 system_pods.go:61] "kindnet-q2sfv" [e3c6220d-2780-43fd-9d48-417fd46db4c7] Running
	I1018 13:20:41.802171 1014478 system_pods.go:61] "kube-apiserver-old-k8s-version-460322" [96fae396-77cc-4d9c-84e4-a41c4aed73b5] Running
	I1018 13:20:41.802193 1014478 system_pods.go:61] "kube-controller-manager-old-k8s-version-460322" [da26c2a0-6c0b-48a9-8903-c5ae62fd9d03] Running
	I1018 13:20:41.802214 1014478 system_pods.go:61] "kube-proxy-r24jz" [72b7b247-6c77-4feb-8734-a6cf94450421] Running
	I1018 13:20:41.802246 1014478 system_pods.go:61] "kube-scheduler-old-k8s-version-460322" [792baf8d-119e-4eea-ad34-41aac735b84b] Running
	I1018 13:20:41.802272 1014478 system_pods.go:61] "storage-provisioner" [cf300c58-b4a5-43da-aaa9-2b0002ba3f8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:20:41.802294 1014478 system_pods.go:74] duration metric: took 5.550738ms to wait for pod list to return data ...
	I1018 13:20:41.802315 1014478 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:20:41.809157 1014478 default_sa.go:45] found service account: "default"
	I1018 13:20:41.809228 1014478 default_sa.go:55] duration metric: took 6.890779ms for default service account to be created ...
	I1018 13:20:41.809253 1014478 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:20:41.813259 1014478 system_pods.go:86] 8 kube-system pods found
	I1018 13:20:41.813347 1014478 system_pods.go:89] "coredns-5dd5756b68-lqv5k" [2ca5efdc-f3fd-488a-90ee-6a4229383c66] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:20:41.813369 1014478 system_pods.go:89] "etcd-old-k8s-version-460322" [d46b2f32-3a4d-44d4-a126-fb038614bd8f] Running
	I1018 13:20:41.813410 1014478 system_pods.go:89] "kindnet-q2sfv" [e3c6220d-2780-43fd-9d48-417fd46db4c7] Running
	I1018 13:20:41.813433 1014478 system_pods.go:89] "kube-apiserver-old-k8s-version-460322" [96fae396-77cc-4d9c-84e4-a41c4aed73b5] Running
	I1018 13:20:41.813454 1014478 system_pods.go:89] "kube-controller-manager-old-k8s-version-460322" [da26c2a0-6c0b-48a9-8903-c5ae62fd9d03] Running
	I1018 13:20:41.813475 1014478 system_pods.go:89] "kube-proxy-r24jz" [72b7b247-6c77-4feb-8734-a6cf94450421] Running
	I1018 13:20:41.813496 1014478 system_pods.go:89] "kube-scheduler-old-k8s-version-460322" [792baf8d-119e-4eea-ad34-41aac735b84b] Running
	I1018 13:20:41.813531 1014478 system_pods.go:89] "storage-provisioner" [cf300c58-b4a5-43da-aaa9-2b0002ba3f8d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:20:41.813569 1014478 retry.go:31] will retry after 277.798232ms: missing components: kube-dns
	I1018 13:20:42.097149 1014478 system_pods.go:86] 8 kube-system pods found
	I1018 13:20:42.097189 1014478 system_pods.go:89] "coredns-5dd5756b68-lqv5k" [2ca5efdc-f3fd-488a-90ee-6a4229383c66] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:20:42.097198 1014478 system_pods.go:89] "etcd-old-k8s-version-460322" [d46b2f32-3a4d-44d4-a126-fb038614bd8f] Running
	I1018 13:20:42.097204 1014478 system_pods.go:89] "kindnet-q2sfv" [e3c6220d-2780-43fd-9d48-417fd46db4c7] Running
	I1018 13:20:42.097210 1014478 system_pods.go:89] "kube-apiserver-old-k8s-version-460322" [96fae396-77cc-4d9c-84e4-a41c4aed73b5] Running
	I1018 13:20:42.097215 1014478 system_pods.go:89] "kube-controller-manager-old-k8s-version-460322" [da26c2a0-6c0b-48a9-8903-c5ae62fd9d03] Running
	I1018 13:20:42.097219 1014478 system_pods.go:89] "kube-proxy-r24jz" [72b7b247-6c77-4feb-8734-a6cf94450421] Running
	I1018 13:20:42.097224 1014478 system_pods.go:89] "kube-scheduler-old-k8s-version-460322" [792baf8d-119e-4eea-ad34-41aac735b84b] Running
	I1018 13:20:42.097229 1014478 system_pods.go:89] "storage-provisioner" [cf300c58-b4a5-43da-aaa9-2b0002ba3f8d] Running
	I1018 13:20:42.097238 1014478 system_pods.go:126] duration metric: took 287.965471ms to wait for k8s-apps to be running ...
	I1018 13:20:42.097246 1014478 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:20:42.097312 1014478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:20:42.115699 1014478 system_svc.go:56] duration metric: took 18.396305ms WaitForService to wait for kubelet
	I1018 13:20:42.115741 1014478 kubeadm.go:586] duration metric: took 15.283348053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:20:42.115766 1014478 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:20:42.120281 1014478 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:20:42.120327 1014478 node_conditions.go:123] node cpu capacity is 2
	I1018 13:20:42.120342 1014478 node_conditions.go:105] duration metric: took 4.568255ms to run NodePressure ...
	I1018 13:20:42.120354 1014478 start.go:241] waiting for startup goroutines ...
	I1018 13:20:42.120372 1014478 start.go:246] waiting for cluster config update ...
	I1018 13:20:42.120384 1014478 start.go:255] writing updated cluster config ...
	I1018 13:20:42.120728 1014478 ssh_runner.go:195] Run: rm -f paused
	I1018 13:20:42.125753 1014478 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:20:42.132307 1014478 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lqv5k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.138458 1014478 pod_ready.go:94] pod "coredns-5dd5756b68-lqv5k" is "Ready"
	I1018 13:20:43.138487 1014478 pod_ready.go:86] duration metric: took 1.006138132s for pod "coredns-5dd5756b68-lqv5k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.141744 1014478 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.146740 1014478 pod_ready.go:94] pod "etcd-old-k8s-version-460322" is "Ready"
	I1018 13:20:43.146771 1014478 pod_ready.go:86] duration metric: took 4.995557ms for pod "etcd-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.149903 1014478 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.155193 1014478 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-460322" is "Ready"
	I1018 13:20:43.155227 1014478 pod_ready.go:86] duration metric: took 5.295827ms for pod "kube-apiserver-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.158532 1014478 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.336155 1014478 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-460322" is "Ready"
	I1018 13:20:43.336185 1014478 pod_ready.go:86] duration metric: took 177.626602ms for pod "kube-controller-manager-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.544476 1014478 pod_ready.go:83] waiting for pod "kube-proxy-r24jz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:43.936635 1014478 pod_ready.go:94] pod "kube-proxy-r24jz" is "Ready"
	I1018 13:20:43.936675 1014478 pod_ready.go:86] duration metric: took 392.11573ms for pod "kube-proxy-r24jz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:44.136760 1014478 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:44.536604 1014478 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-460322" is "Ready"
	I1018 13:20:44.536631 1014478 pod_ready.go:86] duration metric: took 399.839329ms for pod "kube-scheduler-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:20:44.536643 1014478 pod_ready.go:40] duration metric: took 2.410852074s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
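The per-pod waits logged by pod_ready.go above reduce to reading each pod's Ready condition. A compact client-go sketch of that check (an illustration under the assumption that k8s.io/client-go is available; minikube's own helper differs in detail, and the kubeconfig path and pod name are taken from this log):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has a Ready condition set to True.
func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
    pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue, nil
        }
    }
    return false, nil
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21647-834184/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ready, err := isPodReady(cs, "kube-system", "coredns-5dd5756b68-lqv5k")
    if err != nil {
        panic(err)
    }
    fmt.Println("Ready:", ready)
}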
	I1018 13:20:44.595851 1014478 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1018 13:20:44.599187 1014478 out.go:203] 
	W1018 13:20:44.602016 1014478 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 13:20:44.604934 1014478 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 13:20:44.608788 1014478 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-460322" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 13:20:41 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:41.750960035Z" level=info msg="Created container ebf8b50eb54ab767475dbbc524a0f885d47b3bb88dfb0b18398c70bf7a59b128: kube-system/coredns-5dd5756b68-lqv5k/coredns" id=b6e78b94-27d6-4705-ba7d-bfa7060c529c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:20:41 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:41.751812556Z" level=info msg="Starting container: ebf8b50eb54ab767475dbbc524a0f885d47b3bb88dfb0b18398c70bf7a59b128" id=e73585b5-f8d6-413c-843e-762c3d5c9d98 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:20:41 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:41.753710986Z" level=info msg="Started container" PID=1917 containerID=ebf8b50eb54ab767475dbbc524a0f885d47b3bb88dfb0b18398c70bf7a59b128 description=kube-system/coredns-5dd5756b68-lqv5k/coredns id=e73585b5-f8d6-413c-843e-762c3d5c9d98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e4cdfbd74ae861c2fcaa6c3d0f606fda8e87e2d77c837e668067ed062b9905ad
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.133179903Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ca56e018-ab9b-47b7-8216-f8c5edf70a15 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.133275223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.180375012Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fd331c13527605957a15de761acb444f80515be4b4798ce8b75e337e8f9425ab UID:1c6bf5f2-e479-4cab-8117-2ce11ae04d08 NetNS:/var/run/netns/f7104278-93e4-47f8-9efe-1548f008450f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400267aa90}] Aliases:map[]}"
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.180637718Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.205760116Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fd331c13527605957a15de761acb444f80515be4b4798ce8b75e337e8f9425ab UID:1c6bf5f2-e479-4cab-8117-2ce11ae04d08 NetNS:/var/run/netns/f7104278-93e4-47f8-9efe-1548f008450f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400267aa90}] Aliases:map[]}"
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.208722998Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.22028317Z" level=info msg="Ran pod sandbox fd331c13527605957a15de761acb444f80515be4b4798ce8b75e337e8f9425ab with infra container: default/busybox/POD" id=ca56e018-ab9b-47b7-8216-f8c5edf70a15 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.221812604Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c987269-7109-4c2b-a07f-253becc56868 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.222074218Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8c987269-7109-4c2b-a07f-253becc56868 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.222147417Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8c987269-7109-4c2b-a07f-253becc56868 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.230695656Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b1aa7ddd-3a9f-4148-a399-206b32def7c7 name=/runtime.v1.ImageService/PullImage
	Oct 18 13:20:45 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:45.245660905Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.325216872Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b1aa7ddd-3a9f-4148-a399-206b32def7c7 name=/runtime.v1.ImageService/PullImage
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.326202456Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bbe8456d-be1d-4360-b1a3-939a22951729 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.328838132Z" level=info msg="Creating container: default/busybox/busybox" id=1de6da12-ba85-445e-af38-44187218dc89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.329612318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.335089669Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.335564684Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.350606931Z" level=info msg="Created container f0e0f1f253343bda0f5043ef826c838322fb40cd728672f0f3ec6cd4108ad680: default/busybox/busybox" id=1de6da12-ba85-445e-af38-44187218dc89 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.351555756Z" level=info msg="Starting container: f0e0f1f253343bda0f5043ef826c838322fb40cd728672f0f3ec6cd4108ad680" id=b88ef636-0f6a-4f87-875d-e250ed01d4c1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:20:47 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:47.354908419Z" level=info msg="Started container" PID=1979 containerID=f0e0f1f253343bda0f5043ef826c838322fb40cd728672f0f3ec6cd4108ad680 description=default/busybox/busybox id=b88ef636-0f6a-4f87-875d-e250ed01d4c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd331c13527605957a15de761acb444f80515be4b4798ce8b75e337e8f9425ab
	Oct 18 13:20:54 old-k8s-version-460322 crio[835]: time="2025-10-18T13:20:54.997784322Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	f0e0f1f253343       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   fd331c1352760       busybox                                          default
	ebf8b50eb54ab       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      14 seconds ago      Running             coredns                   0                   e4cdfbd74ae86       coredns-5dd5756b68-lqv5k                         kube-system
	9b5a89bef5fc9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   b3b549fc06a23       storage-provisioner                              kube-system
	07a825456f0fd       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   8aabdb84139fc       kindnet-q2sfv                                    kube-system
	1e1fe5b05b3fa       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      29 seconds ago      Running             kube-proxy                0                   c5fa3d2f262c5       kube-proxy-r24jz                                 kube-system
	2472f4951c572       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      49 seconds ago      Running             kube-controller-manager   0                   007de7bb08622       kube-controller-manager-old-k8s-version-460322   kube-system
	c37e83ad8f5d2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      49 seconds ago      Running             etcd                      0                   ed27a48a18efb       etcd-old-k8s-version-460322                      kube-system
	0c6c8e955bfb1       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      49 seconds ago      Running             kube-apiserver            0                   437173f3cec6a       kube-apiserver-old-k8s-version-460322            kube-system
	795942a772417       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      49 seconds ago      Running             kube-scheduler            0                   db792efc6c61a       kube-scheduler-old-k8s-version-460322            kube-system
	
	
	==> coredns [ebf8b50eb54ab767475dbbc524a0f885d47b3bb88dfb0b18398c70bf7a59b128] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47996 - 31870 "HINFO IN 5137940657277159556.5969783498240625594. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022495558s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-460322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-460322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=old-k8s-version-460322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_20_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:20:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-460322
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:20:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:20:45 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:20:45 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:20:45 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:20:45 +0000   Sat, 18 Oct 2025 13:20:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-460322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                08120b82-a464-4f81-9944-a22a9025117c
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-lqv5k                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-460322                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-q2sfv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-460322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-460322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-r24jz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-460322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-460322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-460322 event: Registered Node old-k8s-version-460322 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-460322 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 12:51] overlayfs: idmapped layers are currently not supported
	[Oct18 12:53] overlayfs: idmapped layers are currently not supported
	[Oct18 12:57] overlayfs: idmapped layers are currently not supported
	[Oct18 12:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:59] overlayfs: idmapped layers are currently not supported
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c37e83ad8f5d28a71dcb77c0bc4f845a8a21a56a0b52ff4f79042ff20ca09ab2] <==
	{"level":"info","ts":"2025-10-18T13:20:07.168964Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T13:20:07.167001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-18T13:20:07.167047Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T13:20:07.168809Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T13:20:07.169095Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T13:20:07.169268Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-18T13:20:07.169565Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T13:20:07.44372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-18T13:20:07.443827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-18T13:20:07.443886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-18T13:20:07.443931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-18T13:20:07.443967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-18T13:20:07.444009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-18T13:20:07.444048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-18T13:20:07.445289Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-460322 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T13:20:07.445361Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T13:20:07.446368Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-18T13:20:07.454065Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T13:20:07.4546Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T13:20:07.455223Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T13:20:07.459733Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T13:20:07.459816Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T13:20:07.460274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T13:20:07.460319Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T13:20:07.461476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:20:56 up  5:03,  0 user,  load average: 3.21, 3.21, 2.39
	Linux old-k8s-version-460322 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [07a825456f0fd50f5e773fbc0a301ade8eb6cf8952f80829ea337e217ac0e82d] <==
	I1018 13:20:30.514657       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:20:30.515013       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:20:30.515178       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:20:30.515217       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:20:30.515254       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:20:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:20:30.808877       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:20:30.808969       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:20:30.809040       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:20:30.809387       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 13:20:31.012324       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:20:31.012353       1 metrics.go:72] Registering metrics
	I1018 13:20:31.012423       1 controller.go:711] "Syncing nftables rules"
	I1018 13:20:40.809183       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:20:40.809223       1 main.go:301] handling current node
	I1018 13:20:50.808491       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:20:50.808525       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0c6c8e955bfb1663a01f97db41d6d6ac77ece4433dea93868376034467864293] <==
	I1018 13:20:11.368320       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 13:20:11.382075       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 13:20:11.388538       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 13:20:11.388673       1 aggregator.go:166] initial CRD sync complete...
	I1018 13:20:11.388713       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 13:20:11.388749       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:20:11.388781       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:20:11.428811       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:20:11.464765       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 13:20:11.465463       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 13:20:12.058311       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 13:20:12.063012       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 13:20:12.063039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:20:12.713792       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:20:12.765410       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:20:12.918290       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 13:20:12.925367       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 13:20:12.926608       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 13:20:12.934558       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:20:13.134127       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 13:20:14.409754       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 13:20:14.435120       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 13:20:14.451808       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1018 13:20:26.440662       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 13:20:26.684010       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [2472f4951c5724f3fac5d9f5f632a3514e5a738ccf67b949e9309642145dccf9] <==
	I1018 13:20:26.119130       1 shared_informer.go:318] Caches are synced for cronjob
	I1018 13:20:26.183281       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 13:20:26.446381       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1018 13:20:26.491743       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 13:20:26.491777       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 13:20:26.524040       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 13:20:26.715515       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q2sfv"
	I1018 13:20:26.715923       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r24jz"
	I1018 13:20:27.086473       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cw5cx"
	I1018 13:20:27.148633       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lqv5k"
	I1018 13:20:27.193547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="746.73072ms"
	I1018 13:20:27.245260       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.662161ms"
	I1018 13:20:27.278435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.124973ms"
	I1018 13:20:27.279227       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="683.83µs"
	I1018 13:20:28.288334       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1018 13:20:28.328973       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cw5cx"
	I1018 13:20:28.345347       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.816309ms"
	I1018 13:20:28.356724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.330877ms"
	I1018 13:20:28.356800       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.013µs"
	I1018 13:20:41.360419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.701µs"
	I1018 13:20:41.387640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.409µs"
	I1018 13:20:41.898614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.995µs"
	I1018 13:20:42.898106       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.549106ms"
	I1018 13:20:42.898354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.176µs"
	I1018 13:20:45.932168       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [1e1fe5b05b3fa7231d6ef579f241b3acdd10b3f3349d45c785e252290391edd4] <==
	I1018 13:20:27.365339       1 server_others.go:69] "Using iptables proxy"
	I1018 13:20:27.392902       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1018 13:20:27.450449       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:20:27.452363       1 server_others.go:152] "Using iptables Proxier"
	I1018 13:20:27.452451       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 13:20:27.452483       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 13:20:27.452535       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 13:20:27.452757       1 server.go:846] "Version info" version="v1.28.0"
	I1018 13:20:27.452942       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:20:27.453672       1 config.go:188] "Starting service config controller"
	I1018 13:20:27.453772       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 13:20:27.453822       1 config.go:97] "Starting endpoint slice config controller"
	I1018 13:20:27.453849       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 13:20:27.454495       1 config.go:315] "Starting node config controller"
	I1018 13:20:27.454542       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 13:20:27.553977       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 13:20:27.554035       1 shared_informer.go:318] Caches are synced for service config
	I1018 13:20:27.555060       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [795942a772417439a91a8f0e6d74d85887d5ddba66cef6d1511711231ea53b76] <==
	W1018 13:20:11.623260       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 13:20:11.623300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1018 13:20:11.623393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 13:20:11.623432       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1018 13:20:11.623602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 13:20:11.623668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 13:20:11.623722       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 13:20:11.623752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 13:20:11.623792       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 13:20:11.623852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 13:20:11.623966       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 13:20:11.623984       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 13:20:11.624021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1018 13:20:11.624035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1018 13:20:11.624090       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1018 13:20:11.624105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1018 13:20:11.624163       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 13:20:11.624180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 13:20:11.624631       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1018 13:20:11.624692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 13:20:11.626184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 13:20:11.626216       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 13:20:12.571375       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1018 13:20:12.571494       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1018 13:20:14.409026       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 13:20:26 old-k8s-version-460322 kubelet[1373]: I1018 13:20:26.770388    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3c6220d-2780-43fd-9d48-417fd46db4c7-lib-modules\") pod \"kindnet-q2sfv\" (UID: \"e3c6220d-2780-43fd-9d48-417fd46db4c7\") " pod="kube-system/kindnet-q2sfv"
	Oct 18 13:20:26 old-k8s-version-460322 kubelet[1373]: I1018 13:20:26.770418    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfv8\" (UniqueName: \"kubernetes.io/projected/e3c6220d-2780-43fd-9d48-417fd46db4c7-kube-api-access-5dfv8\") pod \"kindnet-q2sfv\" (UID: \"e3c6220d-2780-43fd-9d48-417fd46db4c7\") " pod="kube-system/kindnet-q2sfv"
	Oct 18 13:20:26 old-k8s-version-460322 kubelet[1373]: I1018 13:20:26.770470    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72b7b247-6c77-4feb-8734-a6cf94450421-kube-proxy\") pod \"kube-proxy-r24jz\" (UID: \"72b7b247-6c77-4feb-8734-a6cf94450421\") " pod="kube-system/kube-proxy-r24jz"
	Oct 18 13:20:26 old-k8s-version-460322 kubelet[1373]: I1018 13:20:26.770494    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hbkx\" (UniqueName: \"kubernetes.io/projected/72b7b247-6c77-4feb-8734-a6cf94450421-kube-api-access-5hbkx\") pod \"kube-proxy-r24jz\" (UID: \"72b7b247-6c77-4feb-8734-a6cf94450421\") " pod="kube-system/kube-proxy-r24jz"
	Oct 18 13:20:26 old-k8s-version-460322 kubelet[1373]: I1018 13:20:26.770587    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72b7b247-6c77-4feb-8734-a6cf94450421-lib-modules\") pod \"kube-proxy-r24jz\" (UID: \"72b7b247-6c77-4feb-8734-a6cf94450421\") " pod="kube-system/kube-proxy-r24jz"
	Oct 18 13:20:26 old-k8s-version-460322 kubelet[1373]: I1018 13:20:26.770643    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72b7b247-6c77-4feb-8734-a6cf94450421-xtables-lock\") pod \"kube-proxy-r24jz\" (UID: \"72b7b247-6c77-4feb-8734-a6cf94450421\") " pod="kube-system/kube-proxy-r24jz"
	Oct 18 13:20:26 old-k8s-version-460322 kubelet[1373]: I1018 13:20:26.770668    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3c6220d-2780-43fd-9d48-417fd46db4c7-xtables-lock\") pod \"kindnet-q2sfv\" (UID: \"e3c6220d-2780-43fd-9d48-417fd46db4c7\") " pod="kube-system/kindnet-q2sfv"
	Oct 18 13:20:27 old-k8s-version-460322 kubelet[1373]: W1018 13:20:27.048653    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/crio-8aabdb84139fc9042211fd020222df189ec563cba833d2b8e349816fa2a44c34 WatchSource:0}: Error finding container 8aabdb84139fc9042211fd020222df189ec563cba833d2b8e349816fa2a44c34: Status 404 returned error can't find the container with id 8aabdb84139fc9042211fd020222df189ec563cba833d2b8e349816fa2a44c34
	Oct 18 13:20:27 old-k8s-version-460322 kubelet[1373]: W1018 13:20:27.105332    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/crio-c5fa3d2f262c584762e5f8e7122f149a0fe2da08eba830c141f19c2a80ad8ed0 WatchSource:0}: Error finding container c5fa3d2f262c584762e5f8e7122f149a0fe2da08eba830c141f19c2a80ad8ed0: Status 404 returned error can't find the container with id c5fa3d2f262c584762e5f8e7122f149a0fe2da08eba830c141f19c2a80ad8ed0
	Oct 18 13:20:30 old-k8s-version-460322 kubelet[1373]: I1018 13:20:30.853159    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r24jz" podStartSLOduration=4.85310721 podCreationTimestamp="2025-10-18 13:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:20:27.820420283 +0000 UTC m=+13.464449148" watchObservedRunningTime="2025-10-18 13:20:30.85310721 +0000 UTC m=+16.497136067"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.324688    1373 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.359566    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-q2sfv" podStartSLOduration=12.005527093 podCreationTimestamp="2025-10-18 13:20:26 +0000 UTC" firstStartedPulling="2025-10-18 13:20:27.071386223 +0000 UTC m=+12.715415080" lastFinishedPulling="2025-10-18 13:20:30.425376041 +0000 UTC m=+16.069404898" observedRunningTime="2025-10-18 13:20:30.854911994 +0000 UTC m=+16.498940859" watchObservedRunningTime="2025-10-18 13:20:41.359516911 +0000 UTC m=+27.003545776"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.360102    1373 topology_manager.go:215] "Topology Admit Handler" podUID="2ca5efdc-f3fd-488a-90ee-6a4229383c66" podNamespace="kube-system" podName="coredns-5dd5756b68-lqv5k"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.361738    1373 topology_manager.go:215] "Topology Admit Handler" podUID="cf300c58-b4a5-43da-aaa9-2b0002ba3f8d" podNamespace="kube-system" podName="storage-provisioner"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.473400    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tntw\" (UniqueName: \"kubernetes.io/projected/cf300c58-b4a5-43da-aaa9-2b0002ba3f8d-kube-api-access-2tntw\") pod \"storage-provisioner\" (UID: \"cf300c58-b4a5-43da-aaa9-2b0002ba3f8d\") " pod="kube-system/storage-provisioner"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.473471    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cks9j\" (UniqueName: \"kubernetes.io/projected/2ca5efdc-f3fd-488a-90ee-6a4229383c66-kube-api-access-cks9j\") pod \"coredns-5dd5756b68-lqv5k\" (UID: \"2ca5efdc-f3fd-488a-90ee-6a4229383c66\") " pod="kube-system/coredns-5dd5756b68-lqv5k"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.473506    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ca5efdc-f3fd-488a-90ee-6a4229383c66-config-volume\") pod \"coredns-5dd5756b68-lqv5k\" (UID: \"2ca5efdc-f3fd-488a-90ee-6a4229383c66\") " pod="kube-system/coredns-5dd5756b68-lqv5k"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.473545    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf300c58-b4a5-43da-aaa9-2b0002ba3f8d-tmp\") pod \"storage-provisioner\" (UID: \"cf300c58-b4a5-43da-aaa9-2b0002ba3f8d\") " pod="kube-system/storage-provisioner"
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: W1018 13:20:41.683353    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/crio-b3b549fc06a23c59a7d968401c5cf96ecf516db4ff17596b46c493bb3141ee7a WatchSource:0}: Error finding container b3b549fc06a23c59a7d968401c5cf96ecf516db4ff17596b46c493bb3141ee7a: Status 404 returned error can't find the container with id b3b549fc06a23c59a7d968401c5cf96ecf516db4ff17596b46c493bb3141ee7a
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: W1018 13:20:41.703161    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/crio-e4cdfbd74ae861c2fcaa6c3d0f606fda8e87e2d77c837e668067ed062b9905ad WatchSource:0}: Error finding container e4cdfbd74ae861c2fcaa6c3d0f606fda8e87e2d77c837e668067ed062b9905ad: Status 404 returned error can't find the container with id e4cdfbd74ae861c2fcaa6c3d0f606fda8e87e2d77c837e668067ed062b9905ad
	Oct 18 13:20:41 old-k8s-version-460322 kubelet[1373]: I1018 13:20:41.896197    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.896150565 podCreationTimestamp="2025-10-18 13:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:20:41.879238425 +0000 UTC m=+27.523267282" watchObservedRunningTime="2025-10-18 13:20:41.896150565 +0000 UTC m=+27.540179430"
	Oct 18 13:20:42 old-k8s-version-460322 kubelet[1373]: I1018 13:20:42.881719    1373 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lqv5k" podStartSLOduration=15.881659124 podCreationTimestamp="2025-10-18 13:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:20:41.898644569 +0000 UTC m=+27.542673450" watchObservedRunningTime="2025-10-18 13:20:42.881659124 +0000 UTC m=+28.525688005"
	Oct 18 13:20:44 old-k8s-version-460322 kubelet[1373]: I1018 13:20:44.827245    1373 topology_manager.go:215] "Topology Admit Handler" podUID="1c6bf5f2-e479-4cab-8117-2ce11ae04d08" podNamespace="default" podName="busybox"
	Oct 18 13:20:44 old-k8s-version-460322 kubelet[1373]: I1018 13:20:44.991477    1373 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwc84\" (UniqueName: \"kubernetes.io/projected/1c6bf5f2-e479-4cab-8117-2ce11ae04d08-kube-api-access-nwc84\") pod \"busybox\" (UID: \"1c6bf5f2-e479-4cab-8117-2ce11ae04d08\") " pod="default/busybox"
	Oct 18 13:20:45 old-k8s-version-460322 kubelet[1373]: W1018 13:20:45.212805    1373 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/crio-fd331c13527605957a15de761acb444f80515be4b4798ce8b75e337e8f9425ab WatchSource:0}: Error finding container fd331c13527605957a15de761acb444f80515be4b4798ce8b75e337e8f9425ab: Status 404 returned error can't find the container with id fd331c13527605957a15de761acb444f80515be4b4798ce8b75e337e8f9425ab
	
	
	==> storage-provisioner [9b5a89bef5fc952e0fcfbafdb6a51a63437fe5ebc2337492c0e67035edccf3f3] <==
	I1018 13:20:41.774355       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:20:41.814680       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:20:41.814897       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 13:20:41.824723       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:20:41.825015       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-460322_d2d2f68f-231c-4e90-a673-347597828d73!
	I1018 13:20:41.825928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4894fd2-3668-4ade-932b-17a0a4c87466", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-460322_d2d2f68f-231c-4e90-a673-347597828d73 became leader
	I1018 13:20:41.927749       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-460322_d2d2f68f-231c-4e90-a673-347597828d73!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460322 -n old-k8s-version-460322
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-460322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-460322 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-460322 --alsologtostderr -v=1: exit status 80 (1.975358392s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-460322 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:22:11.401309 1020306 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:22:11.401574 1020306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:22:11.401603 1020306 out.go:374] Setting ErrFile to fd 2...
	I1018 13:22:11.401622 1020306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:22:11.401959 1020306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:22:11.402282 1020306 out.go:368] Setting JSON to false
	I1018 13:22:11.402373 1020306 mustload.go:65] Loading cluster: old-k8s-version-460322
	I1018 13:22:11.402852 1020306 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:22:11.403469 1020306 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:22:11.421409 1020306 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:22:11.421733 1020306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:22:11.498919 1020306 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 13:22:11.489301363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:22:11.499561 1020306 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-460322 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 13:22:11.503291 1020306 out.go:179] * Pausing node old-k8s-version-460322 ... 
	I1018 13:22:11.506157 1020306 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:22:11.506520 1020306 ssh_runner.go:195] Run: systemctl --version
	I1018 13:22:11.506594 1020306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:22:11.523833 1020306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:22:11.634728 1020306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:22:11.649837 1020306 pause.go:52] kubelet running: true
	I1018 13:22:11.649915 1020306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:22:11.882863 1020306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:22:11.882960 1020306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:22:11.958024 1020306 cri.go:89] found id: "984906cf5e5334c75f8c765a6f2db0d15bb3c67c8dd26c2ea22afe57e46c2ccd"
	I1018 13:22:11.958056 1020306 cri.go:89] found id: "6733299c34fd341f383ae390c143b2befff44dd81eefe87b85616a104cb5f5b6"
	I1018 13:22:11.958062 1020306 cri.go:89] found id: "6bc8a1812064618e157047d140bb8c58f735c688349bfaef61844d1c8c1772e9"
	I1018 13:22:11.958066 1020306 cri.go:89] found id: "64aa55f28d9419099756bfacaed32ffffed8b17abb9f6e4d50f6b4f1195c16b8"
	I1018 13:22:11.958069 1020306 cri.go:89] found id: "326284bdad41b74cf178475229d927879679dce262e83729e460ce45b0997281"
	I1018 13:22:11.958093 1020306 cri.go:89] found id: "9d31a92b9b427ca701355f1a81018ab66a25b0fb391e92ef17e44702f99fb84d"
	I1018 13:22:11.958104 1020306 cri.go:89] found id: "9dfa74e0f8e961fa08a392e31705e4b20f7d53bd00926dc3ca15aa9439d3e0d4"
	I1018 13:22:11.958107 1020306 cri.go:89] found id: "ec327421c09b3321f510dea0dcf341778ada51b0ee5eaedd25bc29f02c72aecc"
	I1018 13:22:11.958110 1020306 cri.go:89] found id: "263befedb5a5df101913b0e93669684d10266a6e061894118ce4fb426a45def8"
	I1018 13:22:11.958123 1020306 cri.go:89] found id: "fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6"
	I1018 13:22:11.958133 1020306 cri.go:89] found id: "25cfc40476d08f879ec09d886ec981c65e17c36cf0044db936682dfbd1c11cf4"
	I1018 13:22:11.958137 1020306 cri.go:89] found id: ""
	I1018 13:22:11.958195 1020306 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:22:11.969205 1020306 retry.go:31] will retry after 259.392012ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:22:11Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:22:12.229753 1020306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:22:12.242974 1020306 pause.go:52] kubelet running: false
	I1018 13:22:12.243073 1020306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:22:12.423871 1020306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:22:12.423956 1020306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:22:12.504757 1020306 cri.go:89] found id: "984906cf5e5334c75f8c765a6f2db0d15bb3c67c8dd26c2ea22afe57e46c2ccd"
	I1018 13:22:12.504786 1020306 cri.go:89] found id: "6733299c34fd341f383ae390c143b2befff44dd81eefe87b85616a104cb5f5b6"
	I1018 13:22:12.504791 1020306 cri.go:89] found id: "6bc8a1812064618e157047d140bb8c58f735c688349bfaef61844d1c8c1772e9"
	I1018 13:22:12.504795 1020306 cri.go:89] found id: "64aa55f28d9419099756bfacaed32ffffed8b17abb9f6e4d50f6b4f1195c16b8"
	I1018 13:22:12.504798 1020306 cri.go:89] found id: "326284bdad41b74cf178475229d927879679dce262e83729e460ce45b0997281"
	I1018 13:22:12.504802 1020306 cri.go:89] found id: "9d31a92b9b427ca701355f1a81018ab66a25b0fb391e92ef17e44702f99fb84d"
	I1018 13:22:12.504804 1020306 cri.go:89] found id: "9dfa74e0f8e961fa08a392e31705e4b20f7d53bd00926dc3ca15aa9439d3e0d4"
	I1018 13:22:12.504808 1020306 cri.go:89] found id: "ec327421c09b3321f510dea0dcf341778ada51b0ee5eaedd25bc29f02c72aecc"
	I1018 13:22:12.504810 1020306 cri.go:89] found id: "263befedb5a5df101913b0e93669684d10266a6e061894118ce4fb426a45def8"
	I1018 13:22:12.504817 1020306 cri.go:89] found id: "fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6"
	I1018 13:22:12.504820 1020306 cri.go:89] found id: "25cfc40476d08f879ec09d886ec981c65e17c36cf0044db936682dfbd1c11cf4"
	I1018 13:22:12.504823 1020306 cri.go:89] found id: ""
	I1018 13:22:12.504876 1020306 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:22:12.517104 1020306 retry.go:31] will retry after 502.878592ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:22:12Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:22:13.020911 1020306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:22:13.035336 1020306 pause.go:52] kubelet running: false
	I1018 13:22:13.035422 1020306 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:22:13.211929 1020306 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:22:13.212038 1020306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:22:13.279555 1020306 cri.go:89] found id: "984906cf5e5334c75f8c765a6f2db0d15bb3c67c8dd26c2ea22afe57e46c2ccd"
	I1018 13:22:13.279578 1020306 cri.go:89] found id: "6733299c34fd341f383ae390c143b2befff44dd81eefe87b85616a104cb5f5b6"
	I1018 13:22:13.279583 1020306 cri.go:89] found id: "6bc8a1812064618e157047d140bb8c58f735c688349bfaef61844d1c8c1772e9"
	I1018 13:22:13.279587 1020306 cri.go:89] found id: "64aa55f28d9419099756bfacaed32ffffed8b17abb9f6e4d50f6b4f1195c16b8"
	I1018 13:22:13.279590 1020306 cri.go:89] found id: "326284bdad41b74cf178475229d927879679dce262e83729e460ce45b0997281"
	I1018 13:22:13.279602 1020306 cri.go:89] found id: "9d31a92b9b427ca701355f1a81018ab66a25b0fb391e92ef17e44702f99fb84d"
	I1018 13:22:13.279606 1020306 cri.go:89] found id: "9dfa74e0f8e961fa08a392e31705e4b20f7d53bd00926dc3ca15aa9439d3e0d4"
	I1018 13:22:13.279610 1020306 cri.go:89] found id: "ec327421c09b3321f510dea0dcf341778ada51b0ee5eaedd25bc29f02c72aecc"
	I1018 13:22:13.279613 1020306 cri.go:89] found id: "263befedb5a5df101913b0e93669684d10266a6e061894118ce4fb426a45def8"
	I1018 13:22:13.279619 1020306 cri.go:89] found id: "fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6"
	I1018 13:22:13.279623 1020306 cri.go:89] found id: "25cfc40476d08f879ec09d886ec981c65e17c36cf0044db936682dfbd1c11cf4"
	I1018 13:22:13.279626 1020306 cri.go:89] found id: ""
	I1018 13:22:13.279729 1020306 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:22:13.293993 1020306 out.go:203] 
	W1018 13:22:13.296971 1020306 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:22:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:22:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 13:22:13.297002 1020306 out.go:285] * 
	* 
	W1018 13:22:13.304233 1020306 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 13:22:13.307713 1020306 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-460322 --alsologtostderr -v=1 failed: exit status 80
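Note: the pause failure above comes down to one probe. After disabling the kubelet and enumerating CRI containers, minikube runs `sudo runc list -f json` on the node and exits with GUEST_PAUSE when that command fails; here it fails with "open /run/runc: no such file or directory", i.e. the default runc state directory is absent on the node (one unconfirmed explanation is that CRI-O 1.34.1 on this image tracks containers under a different runtime state root). The sketch below is a minimal, hypothetical diagnostic in Go, not minikube's own pause code; it assumes it is run on the node itself (for a kic cluster, e.g. via `docker exec old-k8s-version-460322 ...`) and only distinguishes "runc has no state directory" from any other `runc list` failure.

	// probe_runc.go - hypothetical diagnostic, not part of minikube.
	// Repeats the probe logged above (`sudo runc list -f json`) and reports
	// whether the failure is the missing /run/runc state directory.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc list succeeded: %s\n", strings.TrimSpace(string(out)))
			return
		}
		if strings.Contains(string(out), "no such file or directory") {
			// The state root runc expects (/run/runc by default) is not present here.
			if _, statErr := os.Stat("/run/runc"); os.IsNotExist(statErr) {
				fmt.Println("/run/runc does not exist; runc sees no containers on this node")
				return
			}
		}
		fmt.Printf("runc list failed: %v\n%s", err, out)
	}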
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-460322
helpers_test.go:243: (dbg) docker inspect old-k8s-version-460322:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884",
	        "Created": "2025-10-18T13:19:47.412981498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1018194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:21:10.192427426Z",
	            "FinishedAt": "2025-10-18T13:21:09.364907518Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/hostname",
	        "HostsPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/hosts",
	        "LogPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884-json.log",
	        "Name": "/old-k8s-version-460322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-460322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-460322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884",
	                "LowerDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-460322",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-460322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-460322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-460322",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-460322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da3988b07bbb76e5d947cab83b56a67512e1923af2e5cf3bd06086ecdec25943",
	            "SandboxKey": "/var/run/docker/netns/da3988b07bbb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34164"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-460322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:8a:a6:f2:be:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b3865b19e7ef0c5515b69409531de50dd7d3b36c97ad0e3b63e293f7d29b30d",
	                    "EndpointID": "ee36fd7e068587256fa72069cd1fd6d42f2e9377f3e4a2a2478ffb68fceb7149",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-460322",
	                        "a47757ca4663"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
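Note: the inspect output shows the node container itself is healthy: state Running, the usual kic ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1, and SSH reachable on host port 34162. The start log further down resolves that port with a docker Go template; the snippet below is a small stand-alone sketch of the same lookup, assumed to run on the Jenkins host with the docker CLI on PATH, and is not the cli_runner code itself.

	// sshport.go - hypothetical helper mirroring the template used in the log:
	// docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' <name>
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("old-k8s-version-460322")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh published on 127.0.0.1:" + port) // 34162 in the inspect output above
	}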
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460322 -n old-k8s-version-460322
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460322 -n old-k8s-version-460322: exit status 2 (380.455837ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
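Note: `status --format={{.Host}}` printed "Running" yet exited with code 2; the helper tolerates this because a non-zero status exit can reflect cluster component state rather than a command error (hence "may be ok"). The sketch below is a hypothetical wrapper around the same call, assuming the binary path and profile name from this report; it keeps the printed host state even when the process exits non-zero.

	// status_probe.go - hypothetical wrapper around the status call made by the helper above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-460322")
		out, err := cmd.Output()
		host := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("host:", host)
		case errors.As(err, &exitErr):
			// Non-zero exit: stdout still carried the host state ("Running" above),
			// so report both instead of treating this as a hard failure.
			fmt.Printf("host: %s (status exited with code %d)\n", host, exitErr.ExitCode())
		default:
			fmt.Println("could not run status:", err)
		}
	}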
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-460322 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-460322 logs -n 25: (1.450889124s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-633218 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo containerd config dump                                                                                                                                                                                                  │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo crio config                                                                                                                                                                                                             │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ delete  │ -p cilium-633218                                                                                                                                                                                                                              │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ start   │ -p force-systemd-env-914730 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-914730  │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ force-systemd-flag-882807 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-882807 │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ delete  │ -p force-systemd-flag-882807                                                                                                                                                                                                                  │ force-systemd-flag-882807 │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-076887    │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p force-systemd-env-914730                                                                                                                                                                                                                   │ force-systemd-env-914730  │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p cert-options-179041 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ cert-options-179041 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ -p cert-options-179041 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p cert-options-179041                                                                                                                                                                                                                        │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │                     │
	│ stop    │ -p old-k8s-version-460322 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │ 18 Oct 25 13:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-460322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-460322 image list --format=json                                                                                                                                                                                               │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ pause   │ -p old-k8s-version-460322 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:21:09
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:21:09.899883 1018066 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:21:09.900046 1018066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:21:09.900052 1018066 out.go:374] Setting ErrFile to fd 2...
	I1018 13:21:09.900084 1018066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:21:09.901020 1018066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:21:09.901710 1018066 out.go:368] Setting JSON to false
	I1018 13:21:09.902757 1018066 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18222,"bootTime":1760775448,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:21:09.902872 1018066 start.go:141] virtualization:  
	I1018 13:21:09.906044 1018066 out.go:179] * [old-k8s-version-460322] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:21:09.909948 1018066 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:21:09.910074 1018066 notify.go:220] Checking for updates...
	I1018 13:21:09.916306 1018066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:21:09.919350 1018066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:21:09.922573 1018066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:21:09.925534 1018066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:21:09.928510 1018066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:21:09.932003 1018066 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:21:09.935530 1018066 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 13:21:09.938469 1018066 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:21:09.965356 1018066 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:21:09.965498 1018066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:21:10.036287 1018066 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:21:10.022333322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:21:10.036418 1018066 docker.go:318] overlay module found
	I1018 13:21:10.039799 1018066 out.go:179] * Using the docker driver based on existing profile
	I1018 13:21:10.042765 1018066 start.go:305] selected driver: docker
	I1018 13:21:10.042807 1018066 start.go:925] validating driver "docker" against &{Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:21:10.042931 1018066 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:21:10.043770 1018066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:21:10.103158 1018066 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:21:10.092541673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:21:10.103526 1018066 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:21:10.103572 1018066 cni.go:84] Creating CNI manager for ""
	I1018 13:21:10.103640 1018066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:21:10.103920 1018066 start.go:349] cluster config:
	{Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:21:10.107155 1018066 out.go:179] * Starting "old-k8s-version-460322" primary control-plane node in "old-k8s-version-460322" cluster
	I1018 13:21:10.109979 1018066 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:21:10.112909 1018066 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:21:10.115825 1018066 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 13:21:10.115884 1018066 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 13:21:10.115897 1018066 cache.go:58] Caching tarball of preloaded images
	I1018 13:21:10.115907 1018066 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:21:10.115991 1018066 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:21:10.116002 1018066 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 13:21:10.116127 1018066 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/config.json ...
	I1018 13:21:10.135899 1018066 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:21:10.135922 1018066 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:21:10.135940 1018066 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:21:10.135971 1018066 start.go:360] acquireMachinesLock for old-k8s-version-460322: {Name:mk920abd4332d87bf804859db37de89666f5b2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:21:10.136056 1018066 start.go:364] duration metric: took 62.007µs to acquireMachinesLock for "old-k8s-version-460322"
	I1018 13:21:10.136084 1018066 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:21:10.136092 1018066 fix.go:54] fixHost starting: 
	I1018 13:21:10.136355 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:10.153961 1018066 fix.go:112] recreateIfNeeded on old-k8s-version-460322: state=Stopped err=<nil>
	W1018 13:21:10.153994 1018066 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 13:21:10.157375 1018066 out.go:252] * Restarting existing docker container for "old-k8s-version-460322" ...
	I1018 13:21:10.157464 1018066 cli_runner.go:164] Run: docker start old-k8s-version-460322
	I1018 13:21:10.430580 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:10.452488 1018066 kic.go:430] container "old-k8s-version-460322" state is running.
	I1018 13:21:10.452891 1018066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:21:10.478780 1018066 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/config.json ...
	I1018 13:21:10.479029 1018066 machine.go:93] provisionDockerMachine start ...
	I1018 13:21:10.479100 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:10.507622 1018066 main.go:141] libmachine: Using SSH client type: native
	I1018 13:21:10.508177 1018066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1018 13:21:10.508192 1018066 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:21:10.508906 1018066 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 13:21:13.659338 1018066 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-460322
	
	I1018 13:21:13.659365 1018066 ubuntu.go:182] provisioning hostname "old-k8s-version-460322"
	I1018 13:21:13.659433 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:13.678824 1018066 main.go:141] libmachine: Using SSH client type: native
	I1018 13:21:13.679148 1018066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1018 13:21:13.679167 1018066 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-460322 && echo "old-k8s-version-460322" | sudo tee /etc/hostname
	I1018 13:21:13.842558 1018066 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-460322
	
	I1018 13:21:13.842665 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:13.862318 1018066 main.go:141] libmachine: Using SSH client type: native
	I1018 13:21:13.862672 1018066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1018 13:21:13.862695 1018066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-460322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-460322/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-460322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:21:14.016420 1018066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:21:14.016448 1018066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:21:14.016480 1018066 ubuntu.go:190] setting up certificates
	I1018 13:21:14.016489 1018066 provision.go:84] configureAuth start
	I1018 13:21:14.016552 1018066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:21:14.035132 1018066 provision.go:143] copyHostCerts
	I1018 13:21:14.035223 1018066 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:21:14.035244 1018066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:21:14.035329 1018066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:21:14.035430 1018066 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:21:14.035436 1018066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:21:14.035461 1018066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:21:14.035511 1018066 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:21:14.035516 1018066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:21:14.035537 1018066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:21:14.035580 1018066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-460322 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-460322]
	I1018 13:21:14.640879 1018066 provision.go:177] copyRemoteCerts
	I1018 13:21:14.640997 1018066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:21:14.641065 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:14.661783 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:14.767518 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 13:21:14.784825 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 13:21:14.802121 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:21:14.820158 1018066 provision.go:87] duration metric: took 803.654722ms to configureAuth
	I1018 13:21:14.820184 1018066 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:21:14.820381 1018066 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:21:14.820491 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:14.837689 1018066 main.go:141] libmachine: Using SSH client type: native
	I1018 13:21:14.838010 1018066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1018 13:21:14.838033 1018066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:21:15.185806 1018066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:21:15.185831 1018066 machine.go:96] duration metric: took 4.706784103s to provisionDockerMachine
	I1018 13:21:15.185842 1018066 start.go:293] postStartSetup for "old-k8s-version-460322" (driver="docker")
	I1018 13:21:15.185853 1018066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:21:15.185931 1018066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:21:15.185983 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:15.208803 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:15.322201 1018066 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:21:15.326076 1018066 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:21:15.326106 1018066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:21:15.326122 1018066 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:21:15.326181 1018066 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:21:15.326268 1018066 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:21:15.326391 1018066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:21:15.334150 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:21:15.352783 1018066 start.go:296] duration metric: took 166.926321ms for postStartSetup
	I1018 13:21:15.352875 1018066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:21:15.352926 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:15.370988 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:15.472787 1018066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:21:15.477717 1018066 fix.go:56] duration metric: took 5.341617158s for fixHost
	I1018 13:21:15.477745 1018066 start.go:83] releasing machines lock for "old-k8s-version-460322", held for 5.341674923s
	I1018 13:21:15.477817 1018066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:21:15.494389 1018066 ssh_runner.go:195] Run: cat /version.json
	I1018 13:21:15.494456 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:15.494710 1018066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:21:15.494780 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:15.520069 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:15.523994 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:15.720747 1018066 ssh_runner.go:195] Run: systemctl --version
	I1018 13:21:15.727581 1018066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:21:15.765130 1018066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:21:15.769580 1018066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:21:15.769655 1018066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:21:15.777985 1018066 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:21:15.778013 1018066 start.go:495] detecting cgroup driver to use...
	I1018 13:21:15.778049 1018066 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:21:15.778104 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:21:15.794722 1018066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:21:15.808177 1018066 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:21:15.808262 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:21:15.824708 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:21:15.838604 1018066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:21:15.971215 1018066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:21:16.107064 1018066 docker.go:234] disabling docker service ...
	I1018 13:21:16.107142 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:21:16.123542 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:21:16.137503 1018066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:21:16.253413 1018066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:21:16.376695 1018066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:21:16.392492 1018066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:21:16.407694 1018066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 13:21:16.407815 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.417202 1018066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:21:16.417276 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.426389 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.443584 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.453905 1018066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:21:16.464399 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.474487 1018066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.484171 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.493371 1018066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:21:16.501356 1018066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:21:16.509189 1018066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:21:16.642287 1018066 ssh_runner.go:195] Run: sudo systemctl restart crio
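	Taken together, the crictl.yaml write and the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf containing roughly the following settings (a sketch reconstructed from the commands shown; any other keys already present in the file are untouched):
	
	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	
	with /etc/crictl.yaml pointing crictl at unix:///var/run/crio/crio.sock; systemd is then reloaded and crio restarted so the new pause image, cgroupfs driver and unprivileged-port sysctl take effect.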
	I1018 13:21:16.785280 1018066 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:21:16.785408 1018066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:21:16.790019 1018066 start.go:563] Will wait 60s for crictl version
	I1018 13:21:16.790140 1018066 ssh_runner.go:195] Run: which crictl
	I1018 13:21:16.794319 1018066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:21:16.821848 1018066 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:21:16.822017 1018066 ssh_runner.go:195] Run: crio --version
	I1018 13:21:16.854869 1018066 ssh_runner.go:195] Run: crio --version
	I1018 13:21:16.889254 1018066 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1018 13:21:16.892151 1018066 cli_runner.go:164] Run: docker network inspect old-k8s-version-460322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:21:16.908987 1018066 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:21:16.913028 1018066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:21:16.923539 1018066 kubeadm.go:883] updating cluster {Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:21:16.923773 1018066 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 13:21:16.923846 1018066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:21:16.959953 1018066 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:21:16.959979 1018066 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:21:16.960051 1018066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:21:16.990512 1018066 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:21:16.990537 1018066 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:21:16.990546 1018066 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1018 13:21:16.990651 1018066 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-460322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:21:16.990750 1018066 ssh_runner.go:195] Run: crio config
	I1018 13:21:17.065796 1018066 cni.go:84] Creating CNI manager for ""
	I1018 13:21:17.065819 1018066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:21:17.065866 1018066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:21:17.065898 1018066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-460322 NodeName:old-k8s-version-460322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:21:17.066064 1018066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-460322"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:21:17.066146 1018066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 13:21:17.074470 1018066 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:21:17.074547 1018066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:21:17.082759 1018066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 13:21:17.096290 1018066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:21:17.111204 1018066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1018 13:21:17.126514 1018066 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:21:17.130486 1018066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:21:17.140768 1018066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:21:17.264444 1018066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:21:17.284228 1018066 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322 for IP: 192.168.85.2
	I1018 13:21:17.284250 1018066 certs.go:195] generating shared ca certs ...
	I1018 13:21:17.284266 1018066 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:21:17.284464 1018066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:21:17.284532 1018066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:21:17.284544 1018066 certs.go:257] generating profile certs ...
	I1018 13:21:17.284651 1018066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.key
	I1018 13:21:17.284745 1018066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key.449e5b3e
	I1018 13:21:17.284826 1018066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.key
	I1018 13:21:17.284966 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:21:17.285024 1018066 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:21:17.285040 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:21:17.285067 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:21:17.285118 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:21:17.285150 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:21:17.285217 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:21:17.285898 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:21:17.306478 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:21:17.327474 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:21:17.347953 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:21:17.376947 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 13:21:17.398488 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:21:17.421581 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:21:17.449969 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:21:17.480466 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:21:17.506094 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:21:17.529270 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:21:17.560600 1018066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:21:17.576071 1018066 ssh_runner.go:195] Run: openssl version
	I1018 13:21:17.582598 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:21:17.591559 1018066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:21:17.595931 1018066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:21:17.596037 1018066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:21:17.639733 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:21:17.648242 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:21:17.656851 1018066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:21:17.660711 1018066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:21:17.660777 1018066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:21:17.702415 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:21:17.710626 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:21:17.718951 1018066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:21:17.722835 1018066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:21:17.722938 1018066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:21:17.765570 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
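	The openssl x509 -hash -noout runs above compute OpenSSL's subject-name hash for each CA, and that hash becomes the symlink name under /etc/ssl/certs so the library can locate the certificate by hash lookup. For example, using the minikubeCA path and the symlink name taken from the log above:
	
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ ls -l /etc/ssl/certs/b5213941.0    # symlink to /etc/ssl/certs/minikubeCA.pem, per the ln -fs above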
	I1018 13:21:17.773505 1018066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:21:17.777478 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:21:17.819148 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:21:17.861379 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:21:17.904280 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:21:17.970964 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:21:18.043135 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
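	Each -checkend 86400 run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it expires (or has already expired) within that window, which minikube evidently uses before deciding whether the existing certificates can be reused. The same check can be reproduced manually against any of the paths listed, e.g.:
	
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "apiserver.crt valid for at least 24h" \
	      || echo "apiserver.crt expires within 24h"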
	I1018 13:21:18.138372 1018066 kubeadm.go:400] StartCluster: {Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:21:18.138486 1018066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:21:18.138609 1018066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:21:18.193463 1018066 cri.go:89] found id: "9d31a92b9b427ca701355f1a81018ab66a25b0fb391e92ef17e44702f99fb84d"
	I1018 13:21:18.193489 1018066 cri.go:89] found id: "9dfa74e0f8e961fa08a392e31705e4b20f7d53bd00926dc3ca15aa9439d3e0d4"
	I1018 13:21:18.193495 1018066 cri.go:89] found id: "ec327421c09b3321f510dea0dcf341778ada51b0ee5eaedd25bc29f02c72aecc"
	I1018 13:21:18.193529 1018066 cri.go:89] found id: "263befedb5a5df101913b0e93669684d10266a6e061894118ce4fb426a45def8"
	I1018 13:21:18.193540 1018066 cri.go:89] found id: ""
	I1018 13:21:18.193611 1018066 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:21:18.213117 1018066 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:21:18Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:21:18.213222 1018066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:21:18.230230 1018066 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 13:21:18.230268 1018066 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:21:18.230354 1018066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:21:18.239644 1018066 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:21:18.240374 1018066 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-460322" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:21:18.240723 1018066 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-460322" cluster setting kubeconfig missing "old-k8s-version-460322" context setting]
	I1018 13:21:18.241264 1018066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:21:18.243207 1018066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:21:18.254651 1018066 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 13:21:18.254697 1018066 kubeadm.go:601] duration metric: took 24.422428ms to restartPrimaryControlPlane
	I1018 13:21:18.254731 1018066 kubeadm.go:402] duration metric: took 116.369302ms to StartCluster
	I1018 13:21:18.254749 1018066 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:21:18.254857 1018066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:21:18.255967 1018066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:21:18.256308 1018066 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:21:18.256527 1018066 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:21:18.256667 1018066 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:21:18.257028 1018066 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-460322"
	I1018 13:21:18.257046 1018066 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-460322"
	W1018 13:21:18.257053 1018066 addons.go:247] addon storage-provisioner should already be in state true
	I1018 13:21:18.257077 1018066 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:21:18.257564 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:18.257750 1018066 addons.go:69] Setting dashboard=true in profile "old-k8s-version-460322"
	I1018 13:21:18.257784 1018066 addons.go:238] Setting addon dashboard=true in "old-k8s-version-460322"
	W1018 13:21:18.257815 1018066 addons.go:247] addon dashboard should already be in state true
	I1018 13:21:18.257852 1018066 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:21:18.258131 1018066 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-460322"
	I1018 13:21:18.258147 1018066 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-460322"
	I1018 13:21:18.258361 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:18.258654 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:18.263724 1018066 out.go:179] * Verifying Kubernetes components...
	I1018 13:21:18.266989 1018066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:21:18.295525 1018066 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-460322"
	W1018 13:21:18.295549 1018066 addons.go:247] addon default-storageclass should already be in state true
	I1018 13:21:18.295573 1018066 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:21:18.295998 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:18.318907 1018066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:21:18.322062 1018066 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:21:18.322087 1018066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:21:18.322164 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:18.338569 1018066 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 13:21:18.343803 1018066 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 13:21:18.355730 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 13:21:18.355759 1018066 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 13:21:18.355831 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:18.357467 1018066 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:21:18.357489 1018066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:21:18.357545 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:18.390303 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:18.411810 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:18.421638 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:18.627878 1018066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:21:18.650831 1018066 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-460322" to be "Ready" ...
	I1018 13:21:18.680836 1018066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:21:18.697561 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 13:21:18.697636 1018066 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 13:21:18.727205 1018066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:21:18.769300 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 13:21:18.769323 1018066 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 13:21:18.850276 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 13:21:18.850356 1018066 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 13:21:18.897729 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 13:21:18.897802 1018066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 13:21:18.924292 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 13:21:18.924368 1018066 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 13:21:18.976795 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 13:21:18.976872 1018066 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 13:21:19.024927 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 13:21:19.025004 1018066 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 13:21:19.072298 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 13:21:19.072379 1018066 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 13:21:19.107867 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:21:19.107964 1018066 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 13:21:19.134708 1018066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:21:23.249019 1018066 node_ready.go:49] node "old-k8s-version-460322" is "Ready"
	I1018 13:21:23.249046 1018066 node_ready.go:38] duration metric: took 4.598174198s for node "old-k8s-version-460322" to be "Ready" ...
	I1018 13:21:23.249060 1018066 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:21:23.249120 1018066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:21:24.881142 1018066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.200215144s)
	I1018 13:21:24.881274 1018066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.154048105s)
	I1018 13:21:25.483931 1018066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.349114979s)
	I1018 13:21:25.483974 1018066 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.234836594s)
	I1018 13:21:25.484098 1018066 api_server.go:72] duration metric: took 7.227755279s to wait for apiserver process to appear ...
	I1018 13:21:25.484112 1018066 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:21:25.484131 1018066 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:21:25.487361 1018066 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-460322 addons enable metrics-server
	
	I1018 13:21:25.490604 1018066 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 13:21:25.494575 1018066 addons.go:514] duration metric: took 7.237897625s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 13:21:25.496551 1018066 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 13:21:25.498368 1018066 api_server.go:141] control plane version: v1.28.0
	I1018 13:21:25.498398 1018066 api_server.go:131] duration metric: took 14.278384ms to wait for apiserver health ...
	I1018 13:21:25.498408 1018066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:21:25.504973 1018066 system_pods.go:59] 8 kube-system pods found
	I1018 13:21:25.505015 1018066 system_pods.go:61] "coredns-5dd5756b68-lqv5k" [2ca5efdc-f3fd-488a-90ee-6a4229383c66] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:21:25.505026 1018066 system_pods.go:61] "etcd-old-k8s-version-460322" [d46b2f32-3a4d-44d4-a126-fb038614bd8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:21:25.505034 1018066 system_pods.go:61] "kindnet-q2sfv" [e3c6220d-2780-43fd-9d48-417fd46db4c7] Running
	I1018 13:21:25.505047 1018066 system_pods.go:61] "kube-apiserver-old-k8s-version-460322" [96fae396-77cc-4d9c-84e4-a41c4aed73b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:21:25.505062 1018066 system_pods.go:61] "kube-controller-manager-old-k8s-version-460322" [da26c2a0-6c0b-48a9-8903-c5ae62fd9d03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:21:25.505074 1018066 system_pods.go:61] "kube-proxy-r24jz" [72b7b247-6c77-4feb-8734-a6cf94450421] Running
	I1018 13:21:25.505085 1018066 system_pods.go:61] "kube-scheduler-old-k8s-version-460322" [792baf8d-119e-4eea-ad34-41aac735b84b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:21:25.505101 1018066 system_pods.go:61] "storage-provisioner" [cf300c58-b4a5-43da-aaa9-2b0002ba3f8d] Running
	I1018 13:21:25.505114 1018066 system_pods.go:74] duration metric: took 6.693469ms to wait for pod list to return data ...
	I1018 13:21:25.505127 1018066 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:21:25.514954 1018066 default_sa.go:45] found service account: "default"
	I1018 13:21:25.514985 1018066 default_sa.go:55] duration metric: took 9.851069ms for default service account to be created ...
	I1018 13:21:25.514995 1018066 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:21:25.518993 1018066 system_pods.go:86] 8 kube-system pods found
	I1018 13:21:25.519028 1018066 system_pods.go:89] "coredns-5dd5756b68-lqv5k" [2ca5efdc-f3fd-488a-90ee-6a4229383c66] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:21:25.519040 1018066 system_pods.go:89] "etcd-old-k8s-version-460322" [d46b2f32-3a4d-44d4-a126-fb038614bd8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:21:25.519046 1018066 system_pods.go:89] "kindnet-q2sfv" [e3c6220d-2780-43fd-9d48-417fd46db4c7] Running
	I1018 13:21:25.519054 1018066 system_pods.go:89] "kube-apiserver-old-k8s-version-460322" [96fae396-77cc-4d9c-84e4-a41c4aed73b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:21:25.519061 1018066 system_pods.go:89] "kube-controller-manager-old-k8s-version-460322" [da26c2a0-6c0b-48a9-8903-c5ae62fd9d03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:21:25.519072 1018066 system_pods.go:89] "kube-proxy-r24jz" [72b7b247-6c77-4feb-8734-a6cf94450421] Running
	I1018 13:21:25.519079 1018066 system_pods.go:89] "kube-scheduler-old-k8s-version-460322" [792baf8d-119e-4eea-ad34-41aac735b84b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:21:25.519092 1018066 system_pods.go:89] "storage-provisioner" [cf300c58-b4a5-43da-aaa9-2b0002ba3f8d] Running
	I1018 13:21:25.519100 1018066 system_pods.go:126] duration metric: took 4.097934ms to wait for k8s-apps to be running ...
	I1018 13:21:25.519115 1018066 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:21:25.519178 1018066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:21:25.550435 1018066 system_svc.go:56] duration metric: took 31.312764ms WaitForService to wait for kubelet
	I1018 13:21:25.550526 1018066 kubeadm.go:586] duration metric: took 7.294165696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:21:25.550563 1018066 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:21:25.554228 1018066 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:21:25.554270 1018066 node_conditions.go:123] node cpu capacity is 2
	I1018 13:21:25.554282 1018066 node_conditions.go:105] duration metric: took 3.691727ms to run NodePressure ...
	I1018 13:21:25.554296 1018066 start.go:241] waiting for startup goroutines ...
	I1018 13:21:25.554304 1018066 start.go:246] waiting for cluster config update ...
	I1018 13:21:25.554320 1018066 start.go:255] writing updated cluster config ...
	I1018 13:21:25.554638 1018066 ssh_runner.go:195] Run: rm -f paused
	I1018 13:21:25.559405 1018066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:21:25.564469 1018066 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lqv5k" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 13:21:27.570339 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:29.570611 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:32.071286 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:34.570374 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:36.571636 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:38.571809 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:41.086085 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:43.571569 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:45.572920 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:47.576955 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:50.072308 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:52.573411 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:55.071244 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:57.071910 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	I1018 13:21:58.070924 1018066 pod_ready.go:94] pod "coredns-5dd5756b68-lqv5k" is "Ready"
	I1018 13:21:58.070956 1018066 pod_ready.go:86] duration metric: took 32.506456695s for pod "coredns-5dd5756b68-lqv5k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.075904 1018066 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.084449 1018066 pod_ready.go:94] pod "etcd-old-k8s-version-460322" is "Ready"
	I1018 13:21:58.084484 1018066 pod_ready.go:86] duration metric: took 8.546442ms for pod "etcd-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.090531 1018066 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.106174 1018066 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-460322" is "Ready"
	I1018 13:21:58.106257 1018066 pod_ready.go:86] duration metric: took 15.644239ms for pod "kube-apiserver-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.114146 1018066 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.269070 1018066 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-460322" is "Ready"
	I1018 13:21:58.269100 1018066 pod_ready.go:86] duration metric: took 154.876268ms for pod "kube-controller-manager-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.469207 1018066 pod_ready.go:83] waiting for pod "kube-proxy-r24jz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.868813 1018066 pod_ready.go:94] pod "kube-proxy-r24jz" is "Ready"
	I1018 13:21:58.868841 1018066 pod_ready.go:86] duration metric: took 399.609596ms for pod "kube-proxy-r24jz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:59.068988 1018066 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:59.468853 1018066 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-460322" is "Ready"
	I1018 13:21:59.468884 1018066 pod_ready.go:86] duration metric: took 399.870095ms for pod "kube-scheduler-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:59.468897 1018066 pod_ready.go:40] duration metric: took 33.909455259s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:21:59.524954 1018066 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1018 13:21:59.528285 1018066 out.go:203] 
	W1018 13:21:59.531249 1018066 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 13:21:59.534064 1018066 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 13:21:59.537045 1018066 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-460322" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.108034141Z" level=info msg="Created container fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w/dashboard-metrics-scraper" id=de2c2cc3-a7d2-4244-91ac-89d536de8bef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.110439128Z" level=info msg="Starting container: fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6" id=7a0188fa-76c4-4de2-af0b-c5a22a210147 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.113802506Z" level=info msg="Started container" PID=1637 containerID=fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w/dashboard-metrics-scraper id=7a0188fa-76c4-4de2-af0b-c5a22a210147 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8707686676c6bbc9501b6ac71f27ba3d5f26560b66edb23f915dcf1d8f54072
	Oct 18 13:21:56 old-k8s-version-460322 conmon[1635]: conmon fddc01980ddd0742411f <ninfo>: container 1637 exited with status 1
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.688001274Z" level=info msg="Removing container: 02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83" id=b53596ae-7c19-4a58-9934-828b6f6b8ebf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.695938569Z" level=info msg="Error loading conmon cgroup of container 02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83: cgroup deleted" id=b53596ae-7c19-4a58-9934-828b6f6b8ebf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.702537646Z" level=info msg="Removed container 02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w/dashboard-metrics-scraper" id=b53596ae-7c19-4a58-9934-828b6f6b8ebf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.262058681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.268182676Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.268231415Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.268258976Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.271724059Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.271771165Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.271794016Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.275129686Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.275168136Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.275191299Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.278555293Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.278591831Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.278614674Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.28236483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.282402992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.282428765Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.28588364Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.285920563Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	fddc01980ddd0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   d8707686676c6       dashboard-metrics-scraper-5f989dc9cf-xgv8w       kubernetes-dashboard
	984906cf5e533       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   dc83761a69835       storage-provisioner                              kube-system
	25cfc40476d08       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   90cabe408fb59       kubernetes-dashboard-8694d4445c-sxt4n            kubernetes-dashboard
	6733299c34fd3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           50 seconds ago      Running             coredns                     1                   c3c8e250f66ee       coredns-5dd5756b68-lqv5k                         kube-system
	032f209c6105e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   1d4dbb929fd4c       busybox                                          default
	6bc8a18120646       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   35b973d294fc6       kube-proxy-r24jz                                 kube-system
	64aa55f28d941       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   dc83761a69835       storage-provisioner                              kube-system
	326284bdad41b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   08dac1d743165       kindnet-q2sfv                                    kube-system
	9d31a92b9b427       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           56 seconds ago      Running             kube-apiserver              1                   749662b674112       kube-apiserver-old-k8s-version-460322            kube-system
	9dfa74e0f8e96       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           56 seconds ago      Running             kube-scheduler              1                   3d045ec2ed442       kube-scheduler-old-k8s-version-460322            kube-system
	ec327421c09b3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           56 seconds ago      Running             etcd                        1                   6ba364d2b86a7       etcd-old-k8s-version-460322                      kube-system
	263befedb5a5d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           56 seconds ago      Running             kube-controller-manager     1                   9928531d61493       kube-controller-manager-old-k8s-version-460322   kube-system
	
	
	==> coredns [6733299c34fd341f383ae390c143b2befff44dd81eefe87b85616a104cb5f5b6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35522 - 7450 "HINFO IN 7711608374269385620.3571844459731890540. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005223089s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-460322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-460322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=old-k8s-version-460322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_20_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:20:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-460322
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:22:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:21:54 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:21:54 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:21:54 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:21:54 +0000   Sat, 18 Oct 2025 13:20:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-460322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                08120b82-a464-4f81-9944-a22a9025117c
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-lqv5k                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-460322                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-q2sfv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-460322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-460322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-r24jz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-460322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-xgv8w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-sxt4n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-460322 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-460322 event: Registered Node old-k8s-version-460322 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-460322 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node old-k8s-version-460322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-460322 event: Registered Node old-k8s-version-460322 in Controller
	
	
	==> dmesg <==
	[Oct18 12:53] overlayfs: idmapped layers are currently not supported
	[Oct18 12:57] overlayfs: idmapped layers are currently not supported
	[Oct18 12:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:59] overlayfs: idmapped layers are currently not supported
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ec327421c09b3321f510dea0dcf341778ada51b0ee5eaedd25bc29f02c72aecc] <==
	{"level":"info","ts":"2025-10-18T13:21:18.544485Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T13:21:18.544493Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T13:21:18.554197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-18T13:21:18.554371Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-18T13:21:18.554502Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T13:21:18.554531Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T13:21:18.632838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T13:21:18.632987Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T13:21:18.63307Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T13:21:18.63511Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T13:21:18.635193Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T13:21:20.192884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T13:21:20.192999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T13:21:20.19305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-18T13:21:20.193089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T13:21:20.193121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-18T13:21:20.193156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-18T13:21:20.193187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-18T13:21:20.197391Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-460322 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T13:21:20.197494Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T13:21:20.198499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T13:21:20.200157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T13:21:20.201086Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-18T13:21:20.220149Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T13:21:20.220246Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:22:14 up  5:04,  0 user,  load average: 2.35, 2.91, 2.35
	Linux old-k8s-version-460322 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [326284bdad41b74cf178475229d927879679dce262e83729e460ce45b0997281] <==
	I1018 13:21:24.013694       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:21:24.014431       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:21:24.014594       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:21:24.014608       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:21:24.014621       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:21:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:21:24.259217       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:21:24.259238       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:21:24.259246       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:21:24.309737       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:21:54.259953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:21:54.259953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:21:54.260188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:21:54.310602       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:21:55.760034       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:21:55.760063       1 metrics.go:72] Registering metrics
	I1018 13:21:55.760127       1 controller.go:711] "Syncing nftables rules"
	I1018 13:22:04.261176       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:22:04.261232       1 main.go:301] handling current node
	I1018 13:22:14.263828       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:22:14.263878       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d31a92b9b427ca701355f1a81018ab66a25b0fb391e92ef17e44702f99fb84d] <==
	I1018 13:21:23.271285       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 13:21:23.272189       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 13:21:23.288796       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 13:21:23.288885       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 13:21:23.289353       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 13:21:23.291183       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 13:21:23.295086       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:21:23.335155       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 13:21:23.358650       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 13:21:23.359912       1 aggregator.go:166] initial CRD sync complete...
	I1018 13:21:23.359938       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 13:21:23.359945       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:21:23.359951       1 cache.go:39] Caches are synced for autoregister controller
	E1018 13:21:23.373995       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:21:23.960362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:21:25.246519       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 13:21:25.316929       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 13:21:25.353853       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:21:25.367103       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:21:25.377046       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 13:21:25.455521       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.114.193"}
	I1018 13:21:25.475562       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.32.233"}
	I1018 13:21:35.670580       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 13:21:35.817844       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:21:35.861228       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [263befedb5a5df101913b0e93669684d10266a6e061894118ce4fb426a45def8] <==
	I1018 13:21:35.779010       1 range_allocator.go:174] "Sending events to api server"
	I1018 13:21:35.779040       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1018 13:21:35.779065       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1018 13:21:35.779071       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1018 13:21:35.781053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.518148ms"
	I1018 13:21:35.808278       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 13:21:35.825397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.982674ms"
	I1018 13:21:35.838711       1 shared_informer.go:318] Caches are synced for endpoint
	I1018 13:21:35.838926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.787052ms"
	I1018 13:21:35.839040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.12µs"
	I1018 13:21:35.852380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.852687ms"
	I1018 13:21:35.852519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.099µs"
	I1018 13:21:35.890253       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 13:21:36.189945       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 13:21:36.189977       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 13:21:36.253377       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 13:21:41.673176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.001697ms"
	I1018 13:21:41.673987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.799µs"
	I1018 13:21:45.677490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.311µs"
	I1018 13:21:46.680744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.495µs"
	I1018 13:21:47.673316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.938µs"
	I1018 13:21:56.712611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.391µs"
	I1018 13:21:57.725209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.07373ms"
	I1018 13:21:57.725397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.918µs"
	I1018 13:22:06.092928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.6µs"
	
	
	==> kube-proxy [6bc8a1812064618e157047d140bb8c58f735c688349bfaef61844d1c8c1772e9] <==
	I1018 13:21:24.126920       1 server_others.go:69] "Using iptables proxy"
	I1018 13:21:24.230136       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1018 13:21:24.554878       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:21:24.586124       1 server_others.go:152] "Using iptables Proxier"
	I1018 13:21:24.586176       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 13:21:24.586191       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 13:21:24.586216       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 13:21:24.586458       1 server.go:846] "Version info" version="v1.28.0"
	I1018 13:21:24.586469       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:21:24.590411       1 config.go:188] "Starting service config controller"
	I1018 13:21:24.590433       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 13:21:24.590490       1 config.go:97] "Starting endpoint slice config controller"
	I1018 13:21:24.590496       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 13:21:24.590848       1 config.go:315] "Starting node config controller"
	I1018 13:21:24.590854       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 13:21:24.692526       1 shared_informer.go:318] Caches are synced for node config
	I1018 13:21:24.692568       1 shared_informer.go:318] Caches are synced for service config
	I1018 13:21:24.692603       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9dfa74e0f8e961fa08a392e31705e4b20f7d53bd00926dc3ca15aa9439d3e0d4] <==
	I1018 13:21:20.043341       1 serving.go:348] Generated self-signed cert in-memory
	W1018 13:21:22.984254       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 13:21:22.984291       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 13:21:22.984302       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 13:21:22.984308       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 13:21:23.286270       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 13:21:23.286389       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:21:23.292887       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 13:21:23.298891       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:21:23.299086       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 13:21:23.298906       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 13:21:23.399188       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.764589     778 topology_manager.go:215] "Topology Admit Handler" podUID="ee1a1889-ff95-440a-b07e-321beed40111" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-sxt4n"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.775487     778 topology_manager.go:215] "Topology Admit Handler" podUID="61691c8e-05ef-4921-9da8-20bc20887783" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-xgv8w"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.845558     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlw89\" (UniqueName: \"kubernetes.io/projected/61691c8e-05ef-4921-9da8-20bc20887783-kube-api-access-vlw89\") pod \"dashboard-metrics-scraper-5f989dc9cf-xgv8w\" (UID: \"61691c8e-05ef-4921-9da8-20bc20887783\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.845646     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s2j\" (UniqueName: \"kubernetes.io/projected/ee1a1889-ff95-440a-b07e-321beed40111-kube-api-access-w4s2j\") pod \"kubernetes-dashboard-8694d4445c-sxt4n\" (UID: \"ee1a1889-ff95-440a-b07e-321beed40111\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sxt4n"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.845787     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/61691c8e-05ef-4921-9da8-20bc20887783-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-xgv8w\" (UID: \"61691c8e-05ef-4921-9da8-20bc20887783\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.845868     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ee1a1889-ff95-440a-b07e-321beed40111-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-sxt4n\" (UID: \"ee1a1889-ff95-440a-b07e-321beed40111\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sxt4n"
	Oct 18 13:21:36 old-k8s-version-460322 kubelet[778]: W1018 13:21:36.126600     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/crio-d8707686676c6bbc9501b6ac71f27ba3d5f26560b66edb23f915dcf1d8f54072 WatchSource:0}: Error finding container d8707686676c6bbc9501b6ac71f27ba3d5f26560b66edb23f915dcf1d8f54072: Status 404 returned error can't find the container with id d8707686676c6bbc9501b6ac71f27ba3d5f26560b66edb23f915dcf1d8f54072
	Oct 18 13:21:45 old-k8s-version-460322 kubelet[778]: I1018 13:21:45.653216     778 scope.go:117] "RemoveContainer" containerID="c64fa84199249d5f132d8ccba5e90d1ad91c8fc3967c44d08cd6d55af67b6cbc"
	Oct 18 13:21:45 old-k8s-version-460322 kubelet[778]: I1018 13:21:45.675047     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sxt4n" podStartSLOduration=5.764225119 podCreationTimestamp="2025-10-18 13:21:35 +0000 UTC" firstStartedPulling="2025-10-18 13:21:36.119429117 +0000 UTC m=+18.836373222" lastFinishedPulling="2025-10-18 13:21:41.029517157 +0000 UTC m=+23.746461263" observedRunningTime="2025-10-18 13:21:41.658467442 +0000 UTC m=+24.375411564" watchObservedRunningTime="2025-10-18 13:21:45.67431316 +0000 UTC m=+28.391257274"
	Oct 18 13:21:46 old-k8s-version-460322 kubelet[778]: I1018 13:21:46.655875     778 scope.go:117] "RemoveContainer" containerID="02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83"
	Oct 18 13:21:46 old-k8s-version-460322 kubelet[778]: I1018 13:21:46.656286     778 scope.go:117] "RemoveContainer" containerID="c64fa84199249d5f132d8ccba5e90d1ad91c8fc3967c44d08cd6d55af67b6cbc"
	Oct 18 13:21:46 old-k8s-version-460322 kubelet[778]: E1018 13:21:46.656867     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xgv8w_kubernetes-dashboard(61691c8e-05ef-4921-9da8-20bc20887783)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w" podUID="61691c8e-05ef-4921-9da8-20bc20887783"
	Oct 18 13:21:47 old-k8s-version-460322 kubelet[778]: I1018 13:21:47.659097     778 scope.go:117] "RemoveContainer" containerID="02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83"
	Oct 18 13:21:47 old-k8s-version-460322 kubelet[778]: E1018 13:21:47.659540     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xgv8w_kubernetes-dashboard(61691c8e-05ef-4921-9da8-20bc20887783)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w" podUID="61691c8e-05ef-4921-9da8-20bc20887783"
	Oct 18 13:21:54 old-k8s-version-460322 kubelet[778]: I1018 13:21:54.676469     778 scope.go:117] "RemoveContainer" containerID="64aa55f28d9419099756bfacaed32ffffed8b17abb9f6e4d50f6b4f1195c16b8"
	Oct 18 13:21:56 old-k8s-version-460322 kubelet[778]: I1018 13:21:56.078506     778 scope.go:117] "RemoveContainer" containerID="02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83"
	Oct 18 13:21:56 old-k8s-version-460322 kubelet[778]: I1018 13:21:56.685234     778 scope.go:117] "RemoveContainer" containerID="02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83"
	Oct 18 13:21:56 old-k8s-version-460322 kubelet[778]: I1018 13:21:56.685512     778 scope.go:117] "RemoveContainer" containerID="fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6"
	Oct 18 13:21:56 old-k8s-version-460322 kubelet[778]: E1018 13:21:56.685903     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xgv8w_kubernetes-dashboard(61691c8e-05ef-4921-9da8-20bc20887783)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w" podUID="61691c8e-05ef-4921-9da8-20bc20887783"
	Oct 18 13:22:06 old-k8s-version-460322 kubelet[778]: I1018 13:22:06.078783     778 scope.go:117] "RemoveContainer" containerID="fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6"
	Oct 18 13:22:06 old-k8s-version-460322 kubelet[778]: E1018 13:22:06.079630     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xgv8w_kubernetes-dashboard(61691c8e-05ef-4921-9da8-20bc20887783)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w" podUID="61691c8e-05ef-4921-9da8-20bc20887783"
	Oct 18 13:22:11 old-k8s-version-460322 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:22:11 old-k8s-version-460322 kubelet[778]: I1018 13:22:11.839118     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 13:22:11 old-k8s-version-460322 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:22:11 old-k8s-version-460322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [25cfc40476d08f879ec09d886ec981c65e17c36cf0044db936682dfbd1c11cf4] <==
	2025/10/18 13:21:41 Using namespace: kubernetes-dashboard
	2025/10/18 13:21:41 Using in-cluster config to connect to apiserver
	2025/10/18 13:21:41 Using secret token for csrf signing
	2025/10/18 13:21:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 13:21:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 13:21:41 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 13:21:41 Generating JWE encryption key
	2025/10/18 13:21:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 13:21:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 13:21:41 Initializing JWE encryption key from synchronized object
	2025/10/18 13:21:41 Creating in-cluster Sidecar client
	2025/10/18 13:21:41 Serving insecurely on HTTP port: 9090
	2025/10/18 13:21:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:22:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:21:41 Starting overwatch
	
	
	==> storage-provisioner [64aa55f28d9419099756bfacaed32ffffed8b17abb9f6e4d50f6b4f1195c16b8] <==
	I1018 13:21:23.975223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 13:21:53.987044       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [984906cf5e5334c75f8c765a6f2db0d15bb3c67c8dd26c2ea22afe57e46c2ccd] <==
	I1018 13:21:54.728859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:21:54.742145       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:21:54.742265       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 13:22:12.140528       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:22:12.140935       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4894fd2-3668-4ade-932b-17a0a4c87466", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-460322_ba0c30cf-f84d-4c81-8eff-31aee766928b became leader
	I1018 13:22:12.141017       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-460322_ba0c30cf-f84d-4c81-8eff-31aee766928b!
	I1018 13:22:12.241475       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-460322_ba0c30cf-f84d-4c81-8eff-31aee766928b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460322 -n old-k8s-version-460322
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460322 -n old-k8s-version-460322: exit status 2 (379.232212ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-460322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-460322
helpers_test.go:243: (dbg) docker inspect old-k8s-version-460322:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884",
	        "Created": "2025-10-18T13:19:47.412981498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1018194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:21:10.192427426Z",
	            "FinishedAt": "2025-10-18T13:21:09.364907518Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/hostname",
	        "HostsPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/hosts",
	        "LogPath": "/var/lib/docker/containers/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884-json.log",
	        "Name": "/old-k8s-version-460322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-460322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-460322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884",
	                "LowerDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad28395248e6366eb1494ce77852ebc7198807bd4d79eb845c9461024d5ea0dd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-460322",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-460322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-460322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-460322",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-460322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da3988b07bbb76e5d947cab83b56a67512e1923af2e5cf3bd06086ecdec25943",
	            "SandboxKey": "/var/run/docker/netns/da3988b07bbb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34164"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-460322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:8a:a6:f2:be:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b3865b19e7ef0c5515b69409531de50dd7d3b36c97ad0e3b63e293f7d29b30d",
	                    "EndpointID": "ee36fd7e068587256fa72069cd1fd6d42f2e9377f3e4a2a2478ffb68fceb7149",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-460322",
	                        "a47757ca4663"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460322 -n old-k8s-version-460322
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460322 -n old-k8s-version-460322: exit status 2 (366.289898ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-460322 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-460322 logs -n 25: (1.333202401s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-633218 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo containerd config dump                                                                                                                                                                                                  │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo crio config                                                                                                                                                                                                             │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ delete  │ -p cilium-633218                                                                                                                                                                                                                              │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ start   │ -p force-systemd-env-914730 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-914730  │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ force-systemd-flag-882807 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-882807 │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ delete  │ -p force-systemd-flag-882807                                                                                                                                                                                                                  │ force-systemd-flag-882807 │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-076887    │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p force-systemd-env-914730                                                                                                                                                                                                                   │ force-systemd-env-914730  │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p cert-options-179041 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ cert-options-179041 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ -p cert-options-179041 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p cert-options-179041                                                                                                                                                                                                                        │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │                     │
	│ stop    │ -p old-k8s-version-460322 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │ 18 Oct 25 13:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-460322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-460322 image list --format=json                                                                                                                                                                                               │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ pause   │ -p old-k8s-version-460322 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:21:09
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:21:09.899883 1018066 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:21:09.900046 1018066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:21:09.900052 1018066 out.go:374] Setting ErrFile to fd 2...
	I1018 13:21:09.900084 1018066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:21:09.901020 1018066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:21:09.901710 1018066 out.go:368] Setting JSON to false
	I1018 13:21:09.902757 1018066 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18222,"bootTime":1760775448,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:21:09.902872 1018066 start.go:141] virtualization:  
	I1018 13:21:09.906044 1018066 out.go:179] * [old-k8s-version-460322] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:21:09.909948 1018066 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:21:09.910074 1018066 notify.go:220] Checking for updates...
	I1018 13:21:09.916306 1018066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:21:09.919350 1018066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:21:09.922573 1018066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:21:09.925534 1018066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:21:09.928510 1018066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:21:09.932003 1018066 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:21:09.935530 1018066 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 13:21:09.938469 1018066 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:21:09.965356 1018066 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:21:09.965498 1018066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:21:10.036287 1018066 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:21:10.022333322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:21:10.036418 1018066 docker.go:318] overlay module found
	I1018 13:21:10.039799 1018066 out.go:179] * Using the docker driver based on existing profile
	I1018 13:21:10.042765 1018066 start.go:305] selected driver: docker
	I1018 13:21:10.042807 1018066 start.go:925] validating driver "docker" against &{Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:21:10.042931 1018066 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:21:10.043770 1018066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:21:10.103158 1018066 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:21:10.092541673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:21:10.103526 1018066 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:21:10.103572 1018066 cni.go:84] Creating CNI manager for ""
	I1018 13:21:10.103640 1018066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:21:10.103920 1018066 start.go:349] cluster config:
	{Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:21:10.107155 1018066 out.go:179] * Starting "old-k8s-version-460322" primary control-plane node in "old-k8s-version-460322" cluster
	I1018 13:21:10.109979 1018066 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:21:10.112909 1018066 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:21:10.115825 1018066 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 13:21:10.115884 1018066 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 13:21:10.115897 1018066 cache.go:58] Caching tarball of preloaded images
	I1018 13:21:10.115907 1018066 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:21:10.115991 1018066 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:21:10.116002 1018066 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 13:21:10.116127 1018066 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/config.json ...
	I1018 13:21:10.135899 1018066 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:21:10.135922 1018066 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:21:10.135940 1018066 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:21:10.135971 1018066 start.go:360] acquireMachinesLock for old-k8s-version-460322: {Name:mk920abd4332d87bf804859db37de89666f5b2f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:21:10.136056 1018066 start.go:364] duration metric: took 62.007µs to acquireMachinesLock for "old-k8s-version-460322"
	I1018 13:21:10.136084 1018066 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:21:10.136092 1018066 fix.go:54] fixHost starting: 
	I1018 13:21:10.136355 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:10.153961 1018066 fix.go:112] recreateIfNeeded on old-k8s-version-460322: state=Stopped err=<nil>
	W1018 13:21:10.153994 1018066 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 13:21:10.157375 1018066 out.go:252] * Restarting existing docker container for "old-k8s-version-460322" ...
	I1018 13:21:10.157464 1018066 cli_runner.go:164] Run: docker start old-k8s-version-460322
	I1018 13:21:10.430580 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:10.452488 1018066 kic.go:430] container "old-k8s-version-460322" state is running.
	I1018 13:21:10.452891 1018066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:21:10.478780 1018066 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/config.json ...
	I1018 13:21:10.479029 1018066 machine.go:93] provisionDockerMachine start ...
	I1018 13:21:10.479100 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:10.507622 1018066 main.go:141] libmachine: Using SSH client type: native
	I1018 13:21:10.508177 1018066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1018 13:21:10.508192 1018066 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:21:10.508906 1018066 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 13:21:13.659338 1018066 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-460322
	
	I1018 13:21:13.659365 1018066 ubuntu.go:182] provisioning hostname "old-k8s-version-460322"
	I1018 13:21:13.659433 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:13.678824 1018066 main.go:141] libmachine: Using SSH client type: native
	I1018 13:21:13.679148 1018066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1018 13:21:13.679167 1018066 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-460322 && echo "old-k8s-version-460322" | sudo tee /etc/hostname
	I1018 13:21:13.842558 1018066 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-460322
	
	I1018 13:21:13.842665 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:13.862318 1018066 main.go:141] libmachine: Using SSH client type: native
	I1018 13:21:13.862672 1018066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1018 13:21:13.862695 1018066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-460322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-460322/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-460322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:21:14.016420 1018066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:21:14.016448 1018066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:21:14.016480 1018066 ubuntu.go:190] setting up certificates
	I1018 13:21:14.016489 1018066 provision.go:84] configureAuth start
	I1018 13:21:14.016552 1018066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:21:14.035132 1018066 provision.go:143] copyHostCerts
	I1018 13:21:14.035223 1018066 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:21:14.035244 1018066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:21:14.035329 1018066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:21:14.035430 1018066 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:21:14.035436 1018066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:21:14.035461 1018066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:21:14.035511 1018066 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:21:14.035516 1018066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:21:14.035537 1018066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:21:14.035580 1018066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-460322 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-460322]
	I1018 13:21:14.640879 1018066 provision.go:177] copyRemoteCerts
	I1018 13:21:14.640997 1018066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:21:14.641065 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:14.661783 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:14.767518 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 13:21:14.784825 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 13:21:14.802121 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:21:14.820158 1018066 provision.go:87] duration metric: took 803.654722ms to configureAuth
	I1018 13:21:14.820184 1018066 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:21:14.820381 1018066 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:21:14.820491 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:14.837689 1018066 main.go:141] libmachine: Using SSH client type: native
	I1018 13:21:14.838010 1018066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34162 <nil> <nil>}
	I1018 13:21:14.838033 1018066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:21:15.185806 1018066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:21:15.185831 1018066 machine.go:96] duration metric: took 4.706784103s to provisionDockerMachine
	I1018 13:21:15.185842 1018066 start.go:293] postStartSetup for "old-k8s-version-460322" (driver="docker")
	I1018 13:21:15.185853 1018066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:21:15.185931 1018066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:21:15.185983 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:15.208803 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:15.322201 1018066 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:21:15.326076 1018066 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:21:15.326106 1018066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:21:15.326122 1018066 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:21:15.326181 1018066 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:21:15.326268 1018066 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:21:15.326391 1018066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:21:15.334150 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:21:15.352783 1018066 start.go:296] duration metric: took 166.926321ms for postStartSetup
	I1018 13:21:15.352875 1018066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:21:15.352926 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:15.370988 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:15.472787 1018066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:21:15.477717 1018066 fix.go:56] duration metric: took 5.341617158s for fixHost
	I1018 13:21:15.477745 1018066 start.go:83] releasing machines lock for "old-k8s-version-460322", held for 5.341674923s
	I1018 13:21:15.477817 1018066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460322
	I1018 13:21:15.494389 1018066 ssh_runner.go:195] Run: cat /version.json
	I1018 13:21:15.494456 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:15.494710 1018066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:21:15.494780 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:15.520069 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:15.523994 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:15.720747 1018066 ssh_runner.go:195] Run: systemctl --version
	I1018 13:21:15.727581 1018066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:21:15.765130 1018066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:21:15.769580 1018066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:21:15.769655 1018066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:21:15.777985 1018066 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:21:15.778013 1018066 start.go:495] detecting cgroup driver to use...
	I1018 13:21:15.778049 1018066 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:21:15.778104 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:21:15.794722 1018066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:21:15.808177 1018066 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:21:15.808262 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:21:15.824708 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:21:15.838604 1018066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:21:15.971215 1018066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:21:16.107064 1018066 docker.go:234] disabling docker service ...
	I1018 13:21:16.107142 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:21:16.123542 1018066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:21:16.137503 1018066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:21:16.253413 1018066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:21:16.376695 1018066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:21:16.392492 1018066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:21:16.407694 1018066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 13:21:16.407815 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.417202 1018066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:21:16.417276 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.426389 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.443584 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.453905 1018066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:21:16.464399 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.474487 1018066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.484171 1018066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:21:16.493371 1018066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:21:16.501356 1018066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:21:16.509189 1018066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:21:16.642287 1018066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 13:21:16.785280 1018066 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:21:16.785408 1018066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:21:16.790019 1018066 start.go:563] Will wait 60s for crictl version
	I1018 13:21:16.790140 1018066 ssh_runner.go:195] Run: which crictl
	I1018 13:21:16.794319 1018066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:21:16.821848 1018066 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:21:16.822017 1018066 ssh_runner.go:195] Run: crio --version
	I1018 13:21:16.854869 1018066 ssh_runner.go:195] Run: crio --version
	I1018 13:21:16.889254 1018066 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1018 13:21:16.892151 1018066 cli_runner.go:164] Run: docker network inspect old-k8s-version-460322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:21:16.908987 1018066 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:21:16.913028 1018066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:21:16.923539 1018066 kubeadm.go:883] updating cluster {Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:21:16.923773 1018066 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 13:21:16.923846 1018066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:21:16.959953 1018066 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:21:16.959979 1018066 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:21:16.960051 1018066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:21:16.990512 1018066 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:21:16.990537 1018066 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:21:16.990546 1018066 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1018 13:21:16.990651 1018066 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-460322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:21:16.990750 1018066 ssh_runner.go:195] Run: crio config
	I1018 13:21:17.065796 1018066 cni.go:84] Creating CNI manager for ""
	I1018 13:21:17.065819 1018066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:21:17.065866 1018066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:21:17.065898 1018066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-460322 NodeName:old-k8s-version-460322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:21:17.066064 1018066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-460322"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:21:17.066146 1018066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 13:21:17.074470 1018066 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:21:17.074547 1018066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:21:17.082759 1018066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 13:21:17.096290 1018066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:21:17.111204 1018066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1018 13:21:17.126514 1018066 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:21:17.130486 1018066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:21:17.140768 1018066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:21:17.264444 1018066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:21:17.284228 1018066 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322 for IP: 192.168.85.2
	I1018 13:21:17.284250 1018066 certs.go:195] generating shared ca certs ...
	I1018 13:21:17.284266 1018066 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:21:17.284464 1018066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:21:17.284532 1018066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:21:17.284544 1018066 certs.go:257] generating profile certs ...
	I1018 13:21:17.284651 1018066 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.key
	I1018 13:21:17.284745 1018066 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key.449e5b3e
	I1018 13:21:17.284826 1018066 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.key
	I1018 13:21:17.284966 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:21:17.285024 1018066 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:21:17.285040 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:21:17.285067 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:21:17.285118 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:21:17.285150 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:21:17.285217 1018066 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:21:17.285898 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:21:17.306478 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:21:17.327474 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:21:17.347953 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:21:17.376947 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 13:21:17.398488 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:21:17.421581 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:21:17.449969 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:21:17.480466 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:21:17.506094 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:21:17.529270 1018066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:21:17.560600 1018066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:21:17.576071 1018066 ssh_runner.go:195] Run: openssl version
	I1018 13:21:17.582598 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:21:17.591559 1018066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:21:17.595931 1018066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:21:17.596037 1018066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:21:17.639733 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:21:17.648242 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:21:17.656851 1018066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:21:17.660711 1018066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:21:17.660777 1018066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:21:17.702415 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:21:17.710626 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:21:17.718951 1018066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:21:17.722835 1018066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:21:17.722938 1018066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:21:17.765570 1018066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
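For reference, the three hash-and-symlink passes above implement OpenSSL's subject-hash lookup scheme: each CA file under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and exposed as /etc/ssl/certs/<hash>.0. A minimal hypothetical Go sketch of that step (not minikube's code; it assumes openssl is on PATH and ignores hash collisions, which would need .1, .2, ... suffixes):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert hashes a CA certificate with `openssl x509 -hash -noout`
// and symlinks it as /etc/ssl/certs/<subject-hash>.0, the lookup name
// OpenSSL-based clients use to find a trusted CA.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}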
	I1018 13:21:17.773505 1018066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:21:17.777478 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:21:17.819148 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:21:17.861379 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:21:17.904280 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:21:17.970964 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:21:18.043135 1018066 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
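The six probes above check certificate lifetime: "openssl x509 -noout -checkend 86400" exits 0 when the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise. A hypothetical Go sketch of the same check:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinDay mirrors `openssl x509 -noout -in <cert> -checkend 86400`:
// exit status 0 means the certificate is still valid 24h from now,
// a non-zero status means it expires within that window (or already has).
func expiresWithinDay(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return false, nil
}

func main() {
	soon, err := expiresWithinDay("/var/lib/minikube/certs/etcd/server.crt")
	fmt.Println("expires within 24h:", soon, "err:", err)
}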
	I1018 13:21:18.138372 1018066 kubeadm.go:400] StartCluster: {Name:old-k8s-version-460322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-460322 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:21:18.138486 1018066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:21:18.138609 1018066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:21:18.193463 1018066 cri.go:89] found id: "9d31a92b9b427ca701355f1a81018ab66a25b0fb391e92ef17e44702f99fb84d"
	I1018 13:21:18.193489 1018066 cri.go:89] found id: "9dfa74e0f8e961fa08a392e31705e4b20f7d53bd00926dc3ca15aa9439d3e0d4"
	I1018 13:21:18.193495 1018066 cri.go:89] found id: "ec327421c09b3321f510dea0dcf341778ada51b0ee5eaedd25bc29f02c72aecc"
	I1018 13:21:18.193529 1018066 cri.go:89] found id: "263befedb5a5df101913b0e93669684d10266a6e061894118ce4fb426a45def8"
	I1018 13:21:18.193540 1018066 cri.go:89] found id: ""
	I1018 13:21:18.193611 1018066 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:21:18.213117 1018066 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:21:18Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:21:18.213222 1018066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:21:18.230230 1018066 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 13:21:18.230268 1018066 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:21:18.230354 1018066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:21:18.239644 1018066 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:21:18.240374 1018066 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-460322" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:21:18.240723 1018066 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-460322" cluster setting kubeconfig missing "old-k8s-version-460322" context setting]
	I1018 13:21:18.241264 1018066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:21:18.243207 1018066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:21:18.254651 1018066 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 13:21:18.254697 1018066 kubeadm.go:601] duration metric: took 24.422428ms to restartPrimaryControlPlane
	I1018 13:21:18.254731 1018066 kubeadm.go:402] duration metric: took 116.369302ms to StartCluster
	I1018 13:21:18.254749 1018066 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:21:18.254857 1018066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:21:18.255967 1018066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:21:18.256308 1018066 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:21:18.256527 1018066 config.go:182] Loaded profile config "old-k8s-version-460322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 13:21:18.256667 1018066 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:21:18.257028 1018066 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-460322"
	I1018 13:21:18.257046 1018066 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-460322"
	W1018 13:21:18.257053 1018066 addons.go:247] addon storage-provisioner should already be in state true
	I1018 13:21:18.257077 1018066 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:21:18.257564 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:18.257750 1018066 addons.go:69] Setting dashboard=true in profile "old-k8s-version-460322"
	I1018 13:21:18.257784 1018066 addons.go:238] Setting addon dashboard=true in "old-k8s-version-460322"
	W1018 13:21:18.257815 1018066 addons.go:247] addon dashboard should already be in state true
	I1018 13:21:18.257852 1018066 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:21:18.258131 1018066 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-460322"
	I1018 13:21:18.258147 1018066 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-460322"
	I1018 13:21:18.258361 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:18.258654 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:18.263724 1018066 out.go:179] * Verifying Kubernetes components...
	I1018 13:21:18.266989 1018066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:21:18.295525 1018066 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-460322"
	W1018 13:21:18.295549 1018066 addons.go:247] addon default-storageclass should already be in state true
	I1018 13:21:18.295573 1018066 host.go:66] Checking if "old-k8s-version-460322" exists ...
	I1018 13:21:18.295998 1018066 cli_runner.go:164] Run: docker container inspect old-k8s-version-460322 --format={{.State.Status}}
	I1018 13:21:18.318907 1018066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:21:18.322062 1018066 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:21:18.322087 1018066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:21:18.322164 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:18.338569 1018066 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 13:21:18.343803 1018066 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 13:21:18.355730 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 13:21:18.355759 1018066 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 13:21:18.355831 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:18.357467 1018066 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:21:18.357489 1018066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:21:18.357545 1018066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460322
	I1018 13:21:18.390303 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:18.411810 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:18.421638 1018066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34162 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/old-k8s-version-460322/id_rsa Username:docker}
	I1018 13:21:18.627878 1018066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:21:18.650831 1018066 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-460322" to be "Ready" ...
	I1018 13:21:18.680836 1018066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:21:18.697561 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 13:21:18.697636 1018066 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 13:21:18.727205 1018066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:21:18.769300 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 13:21:18.769323 1018066 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 13:21:18.850276 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 13:21:18.850356 1018066 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 13:21:18.897729 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 13:21:18.897802 1018066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 13:21:18.924292 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 13:21:18.924368 1018066 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 13:21:18.976795 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 13:21:18.976872 1018066 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 13:21:19.024927 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 13:21:19.025004 1018066 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 13:21:19.072298 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 13:21:19.072379 1018066 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 13:21:19.107867 1018066 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:21:19.107964 1018066 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 13:21:19.134708 1018066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:21:23.249019 1018066 node_ready.go:49] node "old-k8s-version-460322" is "Ready"
	I1018 13:21:23.249046 1018066 node_ready.go:38] duration metric: took 4.598174198s for node "old-k8s-version-460322" to be "Ready" ...
	I1018 13:21:23.249060 1018066 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:21:23.249120 1018066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:21:24.881142 1018066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.200215144s)
	I1018 13:21:24.881274 1018066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.154048105s)
	I1018 13:21:25.483931 1018066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.349114979s)
	I1018 13:21:25.483974 1018066 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.234836594s)
	I1018 13:21:25.484098 1018066 api_server.go:72] duration metric: took 7.227755279s to wait for apiserver process to appear ...
	I1018 13:21:25.484112 1018066 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:21:25.484131 1018066 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:21:25.487361 1018066 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-460322 addons enable metrics-server
	
	I1018 13:21:25.490604 1018066 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 13:21:25.494575 1018066 addons.go:514] duration metric: took 7.237897625s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 13:21:25.496551 1018066 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 13:21:25.498368 1018066 api_server.go:141] control plane version: v1.28.0
	I1018 13:21:25.498398 1018066 api_server.go:131] duration metric: took 14.278384ms to wait for apiserver health ...
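The healthz wait logged above is an HTTPS GET against https://192.168.85.2:8443/healthz repeated until it returns 200/ok. A minimal hypothetical sketch follows; TLS verification is skipped only because this example does not load the cluster CA (/var/lib/minikube/certs/ca.crt), which a real check should verify against:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// with HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute))
}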
	I1018 13:21:25.498408 1018066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:21:25.504973 1018066 system_pods.go:59] 8 kube-system pods found
	I1018 13:21:25.505015 1018066 system_pods.go:61] "coredns-5dd5756b68-lqv5k" [2ca5efdc-f3fd-488a-90ee-6a4229383c66] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:21:25.505026 1018066 system_pods.go:61] "etcd-old-k8s-version-460322" [d46b2f32-3a4d-44d4-a126-fb038614bd8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:21:25.505034 1018066 system_pods.go:61] "kindnet-q2sfv" [e3c6220d-2780-43fd-9d48-417fd46db4c7] Running
	I1018 13:21:25.505047 1018066 system_pods.go:61] "kube-apiserver-old-k8s-version-460322" [96fae396-77cc-4d9c-84e4-a41c4aed73b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:21:25.505062 1018066 system_pods.go:61] "kube-controller-manager-old-k8s-version-460322" [da26c2a0-6c0b-48a9-8903-c5ae62fd9d03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:21:25.505074 1018066 system_pods.go:61] "kube-proxy-r24jz" [72b7b247-6c77-4feb-8734-a6cf94450421] Running
	I1018 13:21:25.505085 1018066 system_pods.go:61] "kube-scheduler-old-k8s-version-460322" [792baf8d-119e-4eea-ad34-41aac735b84b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:21:25.505101 1018066 system_pods.go:61] "storage-provisioner" [cf300c58-b4a5-43da-aaa9-2b0002ba3f8d] Running
	I1018 13:21:25.505114 1018066 system_pods.go:74] duration metric: took 6.693469ms to wait for pod list to return data ...
	I1018 13:21:25.505127 1018066 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:21:25.514954 1018066 default_sa.go:45] found service account: "default"
	I1018 13:21:25.514985 1018066 default_sa.go:55] duration metric: took 9.851069ms for default service account to be created ...
	I1018 13:21:25.514995 1018066 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:21:25.518993 1018066 system_pods.go:86] 8 kube-system pods found
	I1018 13:21:25.519028 1018066 system_pods.go:89] "coredns-5dd5756b68-lqv5k" [2ca5efdc-f3fd-488a-90ee-6a4229383c66] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:21:25.519040 1018066 system_pods.go:89] "etcd-old-k8s-version-460322" [d46b2f32-3a4d-44d4-a126-fb038614bd8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:21:25.519046 1018066 system_pods.go:89] "kindnet-q2sfv" [e3c6220d-2780-43fd-9d48-417fd46db4c7] Running
	I1018 13:21:25.519054 1018066 system_pods.go:89] "kube-apiserver-old-k8s-version-460322" [96fae396-77cc-4d9c-84e4-a41c4aed73b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:21:25.519061 1018066 system_pods.go:89] "kube-controller-manager-old-k8s-version-460322" [da26c2a0-6c0b-48a9-8903-c5ae62fd9d03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:21:25.519072 1018066 system_pods.go:89] "kube-proxy-r24jz" [72b7b247-6c77-4feb-8734-a6cf94450421] Running
	I1018 13:21:25.519079 1018066 system_pods.go:89] "kube-scheduler-old-k8s-version-460322" [792baf8d-119e-4eea-ad34-41aac735b84b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:21:25.519092 1018066 system_pods.go:89] "storage-provisioner" [cf300c58-b4a5-43da-aaa9-2b0002ba3f8d] Running
	I1018 13:21:25.519100 1018066 system_pods.go:126] duration metric: took 4.097934ms to wait for k8s-apps to be running ...
	I1018 13:21:25.519115 1018066 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:21:25.519178 1018066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:21:25.550435 1018066 system_svc.go:56] duration metric: took 31.312764ms WaitForService to wait for kubelet
	I1018 13:21:25.550526 1018066 kubeadm.go:586] duration metric: took 7.294165696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:21:25.550563 1018066 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:21:25.554228 1018066 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:21:25.554270 1018066 node_conditions.go:123] node cpu capacity is 2
	I1018 13:21:25.554282 1018066 node_conditions.go:105] duration metric: took 3.691727ms to run NodePressure ...
	I1018 13:21:25.554296 1018066 start.go:241] waiting for startup goroutines ...
	I1018 13:21:25.554304 1018066 start.go:246] waiting for cluster config update ...
	I1018 13:21:25.554320 1018066 start.go:255] writing updated cluster config ...
	I1018 13:21:25.554638 1018066 ssh_runner.go:195] Run: rm -f paused
	I1018 13:21:25.559405 1018066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:21:25.564469 1018066 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lqv5k" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 13:21:27.570339 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:29.570611 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:32.071286 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:34.570374 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:36.571636 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:38.571809 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:41.086085 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:43.571569 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:45.572920 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:47.576955 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:50.072308 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:52.573411 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:55.071244 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	W1018 13:21:57.071910 1018066 pod_ready.go:104] pod "coredns-5dd5756b68-lqv5k" is not "Ready", error: <nil>
	I1018 13:21:58.070924 1018066 pod_ready.go:94] pod "coredns-5dd5756b68-lqv5k" is "Ready"
	I1018 13:21:58.070956 1018066 pod_ready.go:86] duration metric: took 32.506456695s for pod "coredns-5dd5756b68-lqv5k" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.075904 1018066 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.084449 1018066 pod_ready.go:94] pod "etcd-old-k8s-version-460322" is "Ready"
	I1018 13:21:58.084484 1018066 pod_ready.go:86] duration metric: took 8.546442ms for pod "etcd-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.090531 1018066 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.106174 1018066 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-460322" is "Ready"
	I1018 13:21:58.106257 1018066 pod_ready.go:86] duration metric: took 15.644239ms for pod "kube-apiserver-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.114146 1018066 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.269070 1018066 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-460322" is "Ready"
	I1018 13:21:58.269100 1018066 pod_ready.go:86] duration metric: took 154.876268ms for pod "kube-controller-manager-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.469207 1018066 pod_ready.go:83] waiting for pod "kube-proxy-r24jz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:58.868813 1018066 pod_ready.go:94] pod "kube-proxy-r24jz" is "Ready"
	I1018 13:21:58.868841 1018066 pod_ready.go:86] duration metric: took 399.609596ms for pod "kube-proxy-r24jz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:59.068988 1018066 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:59.468853 1018066 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-460322" is "Ready"
	I1018 13:21:59.468884 1018066 pod_ready.go:86] duration metric: took 399.870095ms for pod "kube-scheduler-old-k8s-version-460322" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:21:59.468897 1018066 pod_ready.go:40] duration metric: took 33.909455259s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
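The extra wait above polls kube-system pods matching each listed label selector until they report the Ready condition. A simplified hypothetical client-go sketch (not minikube's pod_ready implementation), using the kubeconfig path from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every kube-system pod matching the selector
// currently has the Ready condition set to True.
func podsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21647-834184/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		// Keep polling each selector; a production version would add a deadline.
		for {
			ok, err := podsReady(cs, sel)
			if err == nil && ok {
				break
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("ready:", sel)
	}
}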
	I1018 13:21:59.524954 1018066 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1018 13:21:59.528285 1018066 out.go:203] 
	W1018 13:21:59.531249 1018066 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 13:21:59.534064 1018066 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 13:21:59.537045 1018066 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-460322" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.108034141Z" level=info msg="Created container fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w/dashboard-metrics-scraper" id=de2c2cc3-a7d2-4244-91ac-89d536de8bef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.110439128Z" level=info msg="Starting container: fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6" id=7a0188fa-76c4-4de2-af0b-c5a22a210147 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.113802506Z" level=info msg="Started container" PID=1637 containerID=fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w/dashboard-metrics-scraper id=7a0188fa-76c4-4de2-af0b-c5a22a210147 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8707686676c6bbc9501b6ac71f27ba3d5f26560b66edb23f915dcf1d8f54072
	Oct 18 13:21:56 old-k8s-version-460322 conmon[1635]: conmon fddc01980ddd0742411f <ninfo>: container 1637 exited with status 1
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.688001274Z" level=info msg="Removing container: 02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83" id=b53596ae-7c19-4a58-9934-828b6f6b8ebf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.695938569Z" level=info msg="Error loading conmon cgroup of container 02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83: cgroup deleted" id=b53596ae-7c19-4a58-9934-828b6f6b8ebf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:21:56 old-k8s-version-460322 crio[649]: time="2025-10-18T13:21:56.702537646Z" level=info msg="Removed container 02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w/dashboard-metrics-scraper" id=b53596ae-7c19-4a58-9934-828b6f6b8ebf name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.262058681Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.268182676Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.268231415Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.268258976Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.271724059Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.271771165Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.271794016Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.275129686Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.275168136Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.275191299Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.278555293Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.278591831Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.278614674Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.28236483Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.282402992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.282428765Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.28588364Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:22:04 old-k8s-version-460322 crio[649]: time="2025-10-18T13:22:04.285920563Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	fddc01980ddd0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   d8707686676c6       dashboard-metrics-scraper-5f989dc9cf-xgv8w       kubernetes-dashboard
	984906cf5e533       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   dc83761a69835       storage-provisioner                              kube-system
	25cfc40476d08       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   90cabe408fb59       kubernetes-dashboard-8694d4445c-sxt4n            kubernetes-dashboard
	6733299c34fd3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago      Running             coredns                     1                   c3c8e250f66ee       coredns-5dd5756b68-lqv5k                         kube-system
	032f209c6105e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   1d4dbb929fd4c       busybox                                          default
	6bc8a18120646       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   35b973d294fc6       kube-proxy-r24jz                                 kube-system
	64aa55f28d941       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   dc83761a69835       storage-provisioner                              kube-system
	326284bdad41b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   08dac1d743165       kindnet-q2sfv                                    kube-system
	9d31a92b9b427       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   749662b674112       kube-apiserver-old-k8s-version-460322            kube-system
	9dfa74e0f8e96       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   3d045ec2ed442       kube-scheduler-old-k8s-version-460322            kube-system
	ec327421c09b3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   6ba364d2b86a7       etcd-old-k8s-version-460322                      kube-system
	263befedb5a5d       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   9928531d61493       kube-controller-manager-old-k8s-version-460322   kube-system
	
	
	==> coredns [6733299c34fd341f383ae390c143b2befff44dd81eefe87b85616a104cb5f5b6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35522 - 7450 "HINFO IN 7711608374269385620.3571844459731890540. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005223089s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-460322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-460322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=old-k8s-version-460322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_20_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:20:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-460322
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:22:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:21:54 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:21:54 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:21:54 +0000   Sat, 18 Oct 2025 13:20:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:21:54 +0000   Sat, 18 Oct 2025 13:20:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-460322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                08120b82-a464-4f81-9944-a22a9025117c
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-lqv5k                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-old-k8s-version-460322                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-q2sfv                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-460322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-460322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-r24jz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-460322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-xgv8w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-sxt4n             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 52s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m2s                   kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m2s                   kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s                   kubelet          Node old-k8s-version-460322 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-460322 event: Registered Node old-k8s-version-460322 in Controller
	  Normal  NodeReady                95s                    kubelet          Node old-k8s-version-460322 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node old-k8s-version-460322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node old-k8s-version-460322 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                    node-controller  Node old-k8s-version-460322 event: Registered Node old-k8s-version-460322 in Controller
	
	
	==> dmesg <==
	[Oct18 12:53] overlayfs: idmapped layers are currently not supported
	[Oct18 12:57] overlayfs: idmapped layers are currently not supported
	[Oct18 12:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:59] overlayfs: idmapped layers are currently not supported
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ec327421c09b3321f510dea0dcf341778ada51b0ee5eaedd25bc29f02c72aecc] <==
	{"level":"info","ts":"2025-10-18T13:21:18.544485Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T13:21:18.544493Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T13:21:18.554197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-18T13:21:18.554371Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-18T13:21:18.554502Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T13:21:18.554531Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T13:21:18.632838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T13:21:18.632987Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T13:21:18.63307Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T13:21:18.63511Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T13:21:18.635193Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T13:21:20.192884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T13:21:20.192999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T13:21:20.19305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-18T13:21:20.193089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T13:21:20.193121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-18T13:21:20.193156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-18T13:21:20.193187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-18T13:21:20.197391Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-460322 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T13:21:20.197494Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T13:21:20.198499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T13:21:20.200157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T13:21:20.201086Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-18T13:21:20.220149Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T13:21:20.220246Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:22:17 up  5:04,  0 user,  load average: 2.35, 2.91, 2.35
	Linux old-k8s-version-460322 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [326284bdad41b74cf178475229d927879679dce262e83729e460ce45b0997281] <==
	I1018 13:21:24.013694       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:21:24.014431       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:21:24.014594       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:21:24.014608       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:21:24.014621       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:21:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:21:24.259217       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:21:24.259238       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:21:24.259246       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:21:24.309737       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:21:54.259953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:21:54.259953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:21:54.260188       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:21:54.310602       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:21:55.760034       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:21:55.760063       1 metrics.go:72] Registering metrics
	I1018 13:21:55.760127       1 controller.go:711] "Syncing nftables rules"
	I1018 13:22:04.261176       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:22:04.261232       1 main.go:301] handling current node
	I1018 13:22:14.263828       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:22:14.263878       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d31a92b9b427ca701355f1a81018ab66a25b0fb391e92ef17e44702f99fb84d] <==
	I1018 13:21:23.271285       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 13:21:23.272189       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 13:21:23.288796       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 13:21:23.288885       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 13:21:23.289353       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 13:21:23.291183       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 13:21:23.295086       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:21:23.335155       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 13:21:23.358650       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 13:21:23.359912       1 aggregator.go:166] initial CRD sync complete...
	I1018 13:21:23.359938       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 13:21:23.359945       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:21:23.359951       1 cache.go:39] Caches are synced for autoregister controller
	E1018 13:21:23.373995       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:21:23.960362       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:21:25.246519       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 13:21:25.316929       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 13:21:25.353853       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:21:25.367103       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:21:25.377046       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 13:21:25.455521       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.114.193"}
	I1018 13:21:25.475562       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.32.233"}
	I1018 13:21:35.670580       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 13:21:35.817844       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:21:35.861228       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [263befedb5a5df101913b0e93669684d10266a6e061894118ce4fb426a45def8] <==
	I1018 13:21:35.779010       1 range_allocator.go:174] "Sending events to api server"
	I1018 13:21:35.779040       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1018 13:21:35.779065       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1018 13:21:35.779071       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1018 13:21:35.781053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.518148ms"
	I1018 13:21:35.808278       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 13:21:35.825397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.982674ms"
	I1018 13:21:35.838711       1 shared_informer.go:318] Caches are synced for endpoint
	I1018 13:21:35.838926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.787052ms"
	I1018 13:21:35.839040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.12µs"
	I1018 13:21:35.852380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.852687ms"
	I1018 13:21:35.852519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.099µs"
	I1018 13:21:35.890253       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 13:21:36.189945       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 13:21:36.189977       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 13:21:36.253377       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 13:21:41.673176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.001697ms"
	I1018 13:21:41.673987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.799µs"
	I1018 13:21:45.677490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.311µs"
	I1018 13:21:46.680744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.495µs"
	I1018 13:21:47.673316       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.938µs"
	I1018 13:21:56.712611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.391µs"
	I1018 13:21:57.725209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.07373ms"
	I1018 13:21:57.725397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.918µs"
	I1018 13:22:06.092928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.6µs"
	
	
	==> kube-proxy [6bc8a1812064618e157047d140bb8c58f735c688349bfaef61844d1c8c1772e9] <==
	I1018 13:21:24.126920       1 server_others.go:69] "Using iptables proxy"
	I1018 13:21:24.230136       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1018 13:21:24.554878       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:21:24.586124       1 server_others.go:152] "Using iptables Proxier"
	I1018 13:21:24.586176       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 13:21:24.586191       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 13:21:24.586216       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 13:21:24.586458       1 server.go:846] "Version info" version="v1.28.0"
	I1018 13:21:24.586469       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:21:24.590411       1 config.go:188] "Starting service config controller"
	I1018 13:21:24.590433       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 13:21:24.590490       1 config.go:97] "Starting endpoint slice config controller"
	I1018 13:21:24.590496       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 13:21:24.590848       1 config.go:315] "Starting node config controller"
	I1018 13:21:24.590854       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 13:21:24.692526       1 shared_informer.go:318] Caches are synced for node config
	I1018 13:21:24.692568       1 shared_informer.go:318] Caches are synced for service config
	I1018 13:21:24.692603       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9dfa74e0f8e961fa08a392e31705e4b20f7d53bd00926dc3ca15aa9439d3e0d4] <==
	I1018 13:21:20.043341       1 serving.go:348] Generated self-signed cert in-memory
	W1018 13:21:22.984254       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 13:21:22.984291       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 13:21:22.984302       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 13:21:22.984308       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 13:21:23.286270       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 13:21:23.286389       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:21:23.292887       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 13:21:23.298891       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:21:23.299086       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 13:21:23.298906       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 13:21:23.399188       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.764589     778 topology_manager.go:215] "Topology Admit Handler" podUID="ee1a1889-ff95-440a-b07e-321beed40111" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-sxt4n"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.775487     778 topology_manager.go:215] "Topology Admit Handler" podUID="61691c8e-05ef-4921-9da8-20bc20887783" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-xgv8w"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.845558     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlw89\" (UniqueName: \"kubernetes.io/projected/61691c8e-05ef-4921-9da8-20bc20887783-kube-api-access-vlw89\") pod \"dashboard-metrics-scraper-5f989dc9cf-xgv8w\" (UID: \"61691c8e-05ef-4921-9da8-20bc20887783\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.845646     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s2j\" (UniqueName: \"kubernetes.io/projected/ee1a1889-ff95-440a-b07e-321beed40111-kube-api-access-w4s2j\") pod \"kubernetes-dashboard-8694d4445c-sxt4n\" (UID: \"ee1a1889-ff95-440a-b07e-321beed40111\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sxt4n"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.845787     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/61691c8e-05ef-4921-9da8-20bc20887783-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-xgv8w\" (UID: \"61691c8e-05ef-4921-9da8-20bc20887783\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w"
	Oct 18 13:21:35 old-k8s-version-460322 kubelet[778]: I1018 13:21:35.845868     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ee1a1889-ff95-440a-b07e-321beed40111-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-sxt4n\" (UID: \"ee1a1889-ff95-440a-b07e-321beed40111\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sxt4n"
	Oct 18 13:21:36 old-k8s-version-460322 kubelet[778]: W1018 13:21:36.126600     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a47757ca466398ca77b5e71da2eb665c10ce5ac8fff67fb926f0d6aa1d496884/crio-d8707686676c6bbc9501b6ac71f27ba3d5f26560b66edb23f915dcf1d8f54072 WatchSource:0}: Error finding container d8707686676c6bbc9501b6ac71f27ba3d5f26560b66edb23f915dcf1d8f54072: Status 404 returned error can't find the container with id d8707686676c6bbc9501b6ac71f27ba3d5f26560b66edb23f915dcf1d8f54072
	Oct 18 13:21:45 old-k8s-version-460322 kubelet[778]: I1018 13:21:45.653216     778 scope.go:117] "RemoveContainer" containerID="c64fa84199249d5f132d8ccba5e90d1ad91c8fc3967c44d08cd6d55af67b6cbc"
	Oct 18 13:21:45 old-k8s-version-460322 kubelet[778]: I1018 13:21:45.675047     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sxt4n" podStartSLOduration=5.764225119 podCreationTimestamp="2025-10-18 13:21:35 +0000 UTC" firstStartedPulling="2025-10-18 13:21:36.119429117 +0000 UTC m=+18.836373222" lastFinishedPulling="2025-10-18 13:21:41.029517157 +0000 UTC m=+23.746461263" observedRunningTime="2025-10-18 13:21:41.658467442 +0000 UTC m=+24.375411564" watchObservedRunningTime="2025-10-18 13:21:45.67431316 +0000 UTC m=+28.391257274"
	Oct 18 13:21:46 old-k8s-version-460322 kubelet[778]: I1018 13:21:46.655875     778 scope.go:117] "RemoveContainer" containerID="02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83"
	Oct 18 13:21:46 old-k8s-version-460322 kubelet[778]: I1018 13:21:46.656286     778 scope.go:117] "RemoveContainer" containerID="c64fa84199249d5f132d8ccba5e90d1ad91c8fc3967c44d08cd6d55af67b6cbc"
	Oct 18 13:21:46 old-k8s-version-460322 kubelet[778]: E1018 13:21:46.656867     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xgv8w_kubernetes-dashboard(61691c8e-05ef-4921-9da8-20bc20887783)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w" podUID="61691c8e-05ef-4921-9da8-20bc20887783"
	Oct 18 13:21:47 old-k8s-version-460322 kubelet[778]: I1018 13:21:47.659097     778 scope.go:117] "RemoveContainer" containerID="02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83"
	Oct 18 13:21:47 old-k8s-version-460322 kubelet[778]: E1018 13:21:47.659540     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xgv8w_kubernetes-dashboard(61691c8e-05ef-4921-9da8-20bc20887783)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w" podUID="61691c8e-05ef-4921-9da8-20bc20887783"
	Oct 18 13:21:54 old-k8s-version-460322 kubelet[778]: I1018 13:21:54.676469     778 scope.go:117] "RemoveContainer" containerID="64aa55f28d9419099756bfacaed32ffffed8b17abb9f6e4d50f6b4f1195c16b8"
	Oct 18 13:21:56 old-k8s-version-460322 kubelet[778]: I1018 13:21:56.078506     778 scope.go:117] "RemoveContainer" containerID="02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83"
	Oct 18 13:21:56 old-k8s-version-460322 kubelet[778]: I1018 13:21:56.685234     778 scope.go:117] "RemoveContainer" containerID="02d4e56a61027abceb6805aabad04cc637b3af6073b11ccf5ba7f73aa780dc83"
	Oct 18 13:21:56 old-k8s-version-460322 kubelet[778]: I1018 13:21:56.685512     778 scope.go:117] "RemoveContainer" containerID="fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6"
	Oct 18 13:21:56 old-k8s-version-460322 kubelet[778]: E1018 13:21:56.685903     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xgv8w_kubernetes-dashboard(61691c8e-05ef-4921-9da8-20bc20887783)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w" podUID="61691c8e-05ef-4921-9da8-20bc20887783"
	Oct 18 13:22:06 old-k8s-version-460322 kubelet[778]: I1018 13:22:06.078783     778 scope.go:117] "RemoveContainer" containerID="fddc01980ddd0742411f781e539a191ef7b6d8b2acf68013521650ddacdd00a6"
	Oct 18 13:22:06 old-k8s-version-460322 kubelet[778]: E1018 13:22:06.079630     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-xgv8w_kubernetes-dashboard(61691c8e-05ef-4921-9da8-20bc20887783)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-xgv8w" podUID="61691c8e-05ef-4921-9da8-20bc20887783"
	Oct 18 13:22:11 old-k8s-version-460322 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:22:11 old-k8s-version-460322 kubelet[778]: I1018 13:22:11.839118     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 13:22:11 old-k8s-version-460322 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:22:11 old-k8s-version-460322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [25cfc40476d08f879ec09d886ec981c65e17c36cf0044db936682dfbd1c11cf4] <==
	2025/10/18 13:21:41 Using namespace: kubernetes-dashboard
	2025/10/18 13:21:41 Using in-cluster config to connect to apiserver
	2025/10/18 13:21:41 Using secret token for csrf signing
	2025/10/18 13:21:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 13:21:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 13:21:41 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 13:21:41 Generating JWE encryption key
	2025/10/18 13:21:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 13:21:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 13:21:41 Initializing JWE encryption key from synchronized object
	2025/10/18 13:21:41 Creating in-cluster Sidecar client
	2025/10/18 13:21:41 Serving insecurely on HTTP port: 9090
	2025/10/18 13:21:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:22:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:21:41 Starting overwatch
	
	
	==> storage-provisioner [64aa55f28d9419099756bfacaed32ffffed8b17abb9f6e4d50f6b4f1195c16b8] <==
	I1018 13:21:23.975223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 13:21:53.987044       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [984906cf5e5334c75f8c765a6f2db0d15bb3c67c8dd26c2ea22afe57e46c2ccd] <==
	I1018 13:21:54.728859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:21:54.742145       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:21:54.742265       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 13:22:12.140528       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:22:12.140935       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4894fd2-3668-4ade-932b-17a0a4c87466", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-460322_ba0c30cf-f84d-4c81-8eff-31aee766928b became leader
	I1018 13:22:12.141017       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-460322_ba0c30cf-f84d-4c81-8eff-31aee766928b!
	I1018 13:22:12.241475       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-460322_ba0c30cf-f84d-4c81-8eff-31aee766928b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460322 -n old-k8s-version-460322
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460322 -n old-k8s-version-460322: exit status 2 (384.187967ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-460322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.53s)
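For anyone triaging this failure locally, a minimal manual re-check (not part of the test run, and using only commands that already appear in this report) is to repeat the pause and then poll the same status field the helper queried above:

	out/minikube-linux-arm64 pause -p old-k8s-version-460322 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460322 -n old-k8s-version-460322

After a successful pause the second command is expected to report Paused; in the run above it still reported Running.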

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (267.793964ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:23:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
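The failing check can be reproduced by hand: before enabling the addon, minikube runs the exact command quoted in the error above on the node to see whether any containers are paused. Using the ssh form already used elsewhere in this report:

	out/minikube-linux-arm64 ssh -p no-preload-779884 sudo runc list -f json

On this runner it fails the same way, since /run/runc (the path named in the stderr above, runc's default state directory) does not exist on the node; the crio-managed containers evidently keep their runtime state elsewhere.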
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-779884 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-779884 describe deploy/metrics-server -n kube-system: exit status 1 (81.703321ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-779884 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
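The assertion above checks the metrics-server deployment for the overridden image. As a manual equivalent of that check (context name from this test, standard kubectl), the image can be read back directly:

	kubectl --context no-preload-779884 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

On a healthy run this would be expected to print fake.domain/registry.k8s.io/echoserver:1.4; here the deployment is NotFound because the enable command exited before it was ever applied.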
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-779884
helpers_test.go:243: (dbg) docker inspect no-preload-779884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45",
	        "Created": "2025-10-18T13:22:22.245395401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021956,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:22:22.318790492Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/hostname",
	        "HostsPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/hosts",
	        "LogPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45-json.log",
	        "Name": "/no-preload-779884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-779884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-779884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45",
	                "LowerDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-779884",
	                "Source": "/var/lib/docker/volumes/no-preload-779884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-779884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-779884",
	                "name.minikube.sigs.k8s.io": "no-preload-779884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df8f14d00e68a135c6c3964137dce7e7a3fdd9df3712a3e3d90de3dfca469e73",
	            "SandboxKey": "/var/run/docker/netns/df8f14d00e68",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34167"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34168"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34171"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34169"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34170"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-779884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:d1:6d:14:d7:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "939cb65a3289c015d5d4b8e7692a9fb9fd1181110d0a4789eecbc7983e7821f8",
	                    "EndpointID": "61d08f4f8f62fb87fd7f8efb4501814430f8474d73db5a1d3dbc164e5ef1b090",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-779884",
	                        "78baa17fea0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
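For triage, the fields of interest in the inspect dump above are the container state, the assigned network address, and the published API server port. Assuming jq is available on the runner (it is not part of the test tooling), they can be pulled out in one line:

	docker inspect no-preload-779884 | jq '.[0] | {Status: .State.Status, IP: .NetworkSettings.Networks["no-preload-779884"].IPAddress, Port8443: .NetworkSettings.Ports["8443/tcp"][0].HostPort}'

With the values captured above this yields running, 192.168.85.2 and 34170.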
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-779884 -n no-preload-779884
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-779884 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-779884 logs -n 25: (1.165330976s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-633218 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ ssh     │ -p cilium-633218 sudo crio config                                                                                                                                                                                                             │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │                     │
	│ delete  │ -p cilium-633218                                                                                                                                                                                                                              │ cilium-633218             │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ start   │ -p force-systemd-env-914730 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-914730  │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ force-systemd-flag-882807 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-882807 │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ delete  │ -p force-systemd-flag-882807                                                                                                                                                                                                                  │ force-systemd-flag-882807 │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:18 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-076887    │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p force-systemd-env-914730                                                                                                                                                                                                                   │ force-systemd-env-914730  │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p cert-options-179041 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ cert-options-179041 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ -p cert-options-179041 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p cert-options-179041                                                                                                                                                                                                                        │ cert-options-179041       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │                     │
	│ stop    │ -p old-k8s-version-460322 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │ 18 Oct 25 13:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-460322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-460322 image list --format=json                                                                                                                                                                                               │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ pause   │ -p old-k8s-version-460322 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │                     │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884         │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-076887    │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-779884         │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:22:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:22:23.815433 1022332 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:22:23.815549 1022332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:22:23.815553 1022332 out.go:374] Setting ErrFile to fd 2...
	I1018 13:22:23.815557 1022332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:22:23.815892 1022332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:22:23.816302 1022332 out.go:368] Setting JSON to false
	I1018 13:22:23.817193 1022332 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18296,"bootTime":1760775448,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:22:23.817248 1022332 start.go:141] virtualization:  
	I1018 13:22:23.821168 1022332 out.go:179] * [cert-expiration-076887] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:22:23.825200 1022332 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:22:23.825458 1022332 notify.go:220] Checking for updates...
	I1018 13:22:23.831070 1022332 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:22:23.833938 1022332 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:22:23.836923 1022332 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:22:23.839752 1022332 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:22:23.842574 1022332 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:22:23.845921 1022332 config.go:182] Loaded profile config "cert-expiration-076887": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:22:23.846427 1022332 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:22:23.891716 1022332 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:22:23.891823 1022332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:22:23.984975 1022332 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:true NGoroutines:71 SystemTime:2025-10-18 13:22:23.96885001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:22:23.985075 1022332 docker.go:318] overlay module found
	I1018 13:22:23.988998 1022332 out.go:179] * Using the docker driver based on existing profile
	I1018 13:22:23.991863 1022332 start.go:305] selected driver: docker
	I1018 13:22:23.991874 1022332 start.go:925] validating driver "docker" against &{Name:cert-expiration-076887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-076887 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:22:23.991966 1022332 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:22:23.992723 1022332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:22:24.116116 1022332 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:true NGoroutines:71 SystemTime:2025-10-18 13:22:24.105504376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:22:24.116476 1022332 cni.go:84] Creating CNI manager for ""
	I1018 13:22:24.116537 1022332 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:22:24.116578 1022332 start.go:349] cluster config:
	{Name:cert-expiration-076887 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-076887 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1018 13:22:24.120038 1022332 out.go:179] * Starting "cert-expiration-076887" primary control-plane node in "cert-expiration-076887" cluster
	I1018 13:22:24.123041 1022332 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:22:24.126023 1022332 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:22:24.128914 1022332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:22:24.128980 1022332 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:22:24.129009 1022332 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:22:24.129021 1022332 cache.go:58] Caching tarball of preloaded images
	I1018 13:22:24.129109 1022332 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:22:24.129118 1022332 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:22:24.129225 1022332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/cert-expiration-076887/config.json ...
	I1018 13:22:24.150323 1022332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:22:24.150336 1022332 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:22:24.150357 1022332 cache.go:232] Successfully downloaded all kic artifacts
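	The two "skipping" decisions above (preload tarball and kic base image) boil down to a pair of existence checks. A minimal sketch of those checks, with the path and image reference taken from this log and the digest dropped from the daemon lookup for simplicity:

	    PRELOAD=/home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	    BASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757
	    # 1) preload tarball already cached on the host? then the download is skipped
	    [ -f "$PRELOAD" ] && echo "preload tarball found in cache, skipping download"
	    # 2) kic base image already in the local docker daemon? then the pull is skipped
	    docker image inspect "$BASE" >/dev/null 2>&1 && echo "kic base image exists in daemon, skipping pull"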
	I1018 13:22:24.150389 1022332 start.go:360] acquireMachinesLock for cert-expiration-076887: {Name:mkde9e5aec173126d7ce8a0c4fcb1081dca0666e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:22:24.150459 1022332 start.go:364] duration metric: took 53.096µs to acquireMachinesLock for "cert-expiration-076887"
	I1018 13:22:24.150478 1022332 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:22:24.150483 1022332 fix.go:54] fixHost starting: 
	I1018 13:22:24.150769 1022332 cli_runner.go:164] Run: docker container inspect cert-expiration-076887 --format={{.State.Status}}
	I1018 13:22:24.168704 1022332 fix.go:112] recreateIfNeeded on cert-expiration-076887: state=Running err=<nil>
	W1018 13:22:24.168725 1022332 fix.go:138] unexpected machine state, will restart: <nil>
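	Because the "cert-expiration-076887" profile already exists, fixHost only needs the current container state to decide what to do next; the log shows state=Running, after which the running container is re-provisioned in place rather than recreated. A minimal sketch of that probe, mirroring the docker container inspect call logged above:

	    # probe the profile container's state; "absent" stands in for a missing container
	    state=$(docker container inspect cert-expiration-076887 --format '{{.State.Status}}' 2>/dev/null || echo absent)
	    echo "cert-expiration-076887 state: $state"   # Running here, so minikube updates the existing container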
	I1018 13:22:21.287972 1021653 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 13:22:21.288269 1021653 start.go:159] libmachine.API.Create for "no-preload-779884" (driver="docker")
	I1018 13:22:21.288333 1021653 client.go:168] LocalClient.Create starting
	I1018 13:22:21.288417 1021653 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 13:22:21.288505 1021653 main.go:141] libmachine: Decoding PEM data...
	I1018 13:22:21.288531 1021653 main.go:141] libmachine: Parsing certificate...
	I1018 13:22:21.288606 1021653 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 13:22:21.288629 1021653 main.go:141] libmachine: Decoding PEM data...
	I1018 13:22:21.288640 1021653 main.go:141] libmachine: Parsing certificate...
	I1018 13:22:21.289013 1021653 cli_runner.go:164] Run: docker network inspect no-preload-779884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 13:22:21.314075 1021653 cli_runner.go:211] docker network inspect no-preload-779884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 13:22:21.314171 1021653 network_create.go:284] running [docker network inspect no-preload-779884] to gather additional debugging logs...
	I1018 13:22:21.314193 1021653 cli_runner.go:164] Run: docker network inspect no-preload-779884
	W1018 13:22:21.330487 1021653 cli_runner.go:211] docker network inspect no-preload-779884 returned with exit code 1
	I1018 13:22:21.330540 1021653 network_create.go:287] error running [docker network inspect no-preload-779884]: docker network inspect no-preload-779884: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-779884 not found
	I1018 13:22:21.330556 1021653 network_create.go:289] output of [docker network inspect no-preload-779884]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-779884 not found
	
	** /stderr **
	I1018 13:22:21.330675 1021653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:22:21.347532 1021653 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
	I1018 13:22:21.348030 1021653 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b162987809b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:5f:25:ac:cd:2a} reservation:<nil>}
	I1018 13:22:21.348321 1021653 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c986d614dab5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:69:4f:12:e6:e4} reservation:<nil>}
	I1018 13:22:21.348609 1021653 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-30b55a9e8dbd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:38:91:ed:1b:fa} reservation:<nil>}
	I1018 13:22:21.349056 1021653 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c27cb0}
	I1018 13:22:21.349093 1021653 network_create.go:124] attempt to create docker network no-preload-779884 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 13:22:21.349184 1021653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-779884 no-preload-779884
	I1018 13:22:21.427546 1021653 network_create.go:108] docker network no-preload-779884 192.168.85.0/24 created
	I1018 13:22:21.427622 1021653 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-779884" container
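	The four "skipping subnet" lines and the final pick of 192.168.85.0/24 follow a simple probe: walk candidate private /24 subnets and take the first one no existing docker bridge already owns; the gateway becomes .1 and the single node gets the static .2 address, which then feeds the docker network create call shown above. A rough sketch of that probe (the candidate list below is inferred from this log, not taken from minikube source):

	    # subnets already claimed by existing docker networks
	    taken=$(docker network ls -q | xargs -r -n1 docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
	    for third in 49 58 67 76 85 94; do
	      subnet="192.168.${third}.0/24"
	      if printf '%s\n' "$taken" | grep -qx "$subnet"; then
	        echo "skipping subnet $subnet that is taken"
	      else
	        echo "using free private subnet $subnet (gateway 192.168.${third}.1, node IP 192.168.${third}.2)"
	        break
	      fi
	    done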
	I1018 13:22:21.427770 1021653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 13:22:21.451189 1021653 cli_runner.go:164] Run: docker volume create no-preload-779884 --label name.minikube.sigs.k8s.io=no-preload-779884 --label created_by.minikube.sigs.k8s.io=true
	I1018 13:22:21.473727 1021653 oci.go:103] Successfully created a docker volume no-preload-779884
	I1018 13:22:21.473825 1021653 cli_runner.go:164] Run: docker run --rm --name no-preload-779884-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-779884 --entrypoint /usr/bin/test -v no-preload-779884:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 13:22:21.596765 1021653 cache.go:162] opening:  /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 13:22:21.607822 1021653 cache.go:162] opening:  /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 13:22:21.625520 1021653 cache.go:162] opening:  /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 13:22:21.625995 1021653 cache.go:162] opening:  /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 13:22:21.643573 1021653 cache.go:162] opening:  /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 13:22:21.649532 1021653 cache.go:162] opening:  /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 13:22:21.669822 1021653 cache.go:162] opening:  /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 13:22:21.703619 1021653 cache.go:157] /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 13:22:21.703731 1021653 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 441.62195ms
	I1018 13:22:21.703759 1021653 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 13:22:22.159410 1021653 cache.go:157] /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 13:22:22.159438 1021653 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 901.309528ms
	I1018 13:22:22.159452 1021653 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 13:22:22.163556 1021653 oci.go:107] Successfully prepared a docker volume no-preload-779884
	I1018 13:22:22.163599 1021653 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 13:22:22.163777 1021653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 13:22:22.163894 1021653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 13:22:22.227767 1021653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-779884 --name no-preload-779884 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-779884 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-779884 --network no-preload-779884 --ip 192.168.85.2 --volume no-preload-779884:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 13:22:22.640268 1021653 cache.go:157] /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 13:22:22.640344 1021653 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.376746445s
	I1018 13:22:22.640384 1021653 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 13:22:22.650735 1021653 cache.go:157] /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 13:22:22.650809 1021653 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.387013223s
	I1018 13:22:22.650835 1021653 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 13:22:22.692436 1021653 cache.go:157] /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 13:22:22.692463 1021653 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.431102164s
	I1018 13:22:22.692477 1021653 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 13:22:22.721248 1021653 cache.go:157] /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 13:22:22.721279 1021653 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.46017417s
	I1018 13:22:22.721291 1021653 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 13:22:22.733283 1021653 cli_runner.go:164] Run: docker container inspect no-preload-779884 --format={{.State.Running}}
	I1018 13:22:22.758518 1021653 cli_runner.go:164] Run: docker container inspect no-preload-779884 --format={{.State.Status}}
	I1018 13:22:22.810997 1021653 cli_runner.go:164] Run: docker exec no-preload-779884 stat /var/lib/dpkg/alternatives/iptables
	I1018 13:22:22.878169 1021653 oci.go:144] the created container "no-preload-779884" has a running status.
	I1018 13:22:22.878196 1021653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa...
	I1018 13:22:23.909431 1021653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 13:22:23.954459 1021653 cli_runner.go:164] Run: docker container inspect no-preload-779884 --format={{.State.Status}}
	I1018 13:22:23.979248 1021653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 13:22:23.979268 1021653 kic_runner.go:114] Args: [docker exec --privileged no-preload-779884 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 13:22:24.049719 1021653 cli_runner.go:164] Run: docker container inspect no-preload-779884 --format={{.State.Status}}
	I1018 13:22:24.070533 1021653 cache.go:157] /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 13:22:24.070563 1021653 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.807234966s
	I1018 13:22:24.070576 1021653 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 13:22:24.070588 1021653 cache.go:87] Successfully saved all images to host disk.
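	The cache paths in the lines above follow a fixed mapping: each image reference is stored under .minikube/cache/images/<arch>/ with the tag separator ':' replaced by '_'. A small sketch of that mapping, using images and the jenkins cache directory from this log:

	    CACHE=/home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64
	    for img in registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/kube-apiserver:v1.34.1; do
	      # registry.k8s.io/pause:3.10.1 -> .../registry.k8s.io/pause_3.10.1, etc.
	      echo "$img -> $CACHE/$(printf '%s' "$img" | tr ':' '_')"
	    done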
	I1018 13:22:24.072253 1021653 machine.go:93] provisionDockerMachine start ...
	I1018 13:22:24.072353 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:24.118775 1021653 main.go:141] libmachine: Using SSH client type: native
	I1018 13:22:24.119577 1021653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34167 <nil> <nil>}
	I1018 13:22:24.119598 1021653 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:22:24.120456 1021653 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 13:22:24.172234 1022332 out.go:252] * Updating the running docker "cert-expiration-076887" container ...
	I1018 13:22:24.172266 1022332 machine.go:93] provisionDockerMachine start ...
	I1018 13:22:24.172366 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:24.191067 1022332 main.go:141] libmachine: Using SSH client type: native
	I1018 13:22:24.191442 1022332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34147 <nil> <nil>}
	I1018 13:22:24.191451 1022332 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:22:24.343962 1022332 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-076887
	
	I1018 13:22:24.343979 1022332 ubuntu.go:182] provisioning hostname "cert-expiration-076887"
	I1018 13:22:24.344073 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:24.373363 1022332 main.go:141] libmachine: Using SSH client type: native
	I1018 13:22:24.373673 1022332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34147 <nil> <nil>}
	I1018 13:22:24.373682 1022332 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-076887 && echo "cert-expiration-076887" | sudo tee /etc/hostname
	I1018 13:22:24.582201 1022332 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-076887
	
	I1018 13:22:24.582306 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:24.608994 1022332 main.go:141] libmachine: Using SSH client type: native
	I1018 13:22:24.609973 1022332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34147 <nil> <nil>}
	I1018 13:22:24.609992 1022332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-076887' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-076887/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-076887' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:22:24.784819 1022332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:22:24.784837 1022332 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:22:24.784853 1022332 ubuntu.go:190] setting up certificates
	I1018 13:22:24.784877 1022332 provision.go:84] configureAuth start
	I1018 13:22:24.784944 1022332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-076887
	I1018 13:22:24.802894 1022332 provision.go:143] copyHostCerts
	I1018 13:22:24.802960 1022332 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:22:24.802998 1022332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:22:24.803464 1022332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:22:24.803606 1022332 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:22:24.803611 1022332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:22:24.803638 1022332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:22:24.803745 1022332 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:22:24.803749 1022332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:22:24.803775 1022332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:22:24.803829 1022332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-076887 san=[127.0.0.1 192.168.76.2 cert-expiration-076887 localhost minikube]
	I1018 13:22:25.075767 1022332 provision.go:177] copyRemoteCerts
	I1018 13:22:25.075833 1022332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:22:25.075881 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:25.093817 1022332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34147 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/cert-expiration-076887/id_rsa Username:docker}
	I1018 13:22:25.200723 1022332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 13:22:25.234965 1022332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 13:22:25.255337 1022332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:22:25.276803 1022332 provision.go:87] duration metric: took 491.902587ms to configureAuth
	I1018 13:22:25.276821 1022332 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:22:25.277005 1022332 config.go:182] Loaded profile config "cert-expiration-076887": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:22:25.277118 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:25.294365 1022332 main.go:141] libmachine: Using SSH client type: native
	I1018 13:22:25.294669 1022332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34147 <nil> <nil>}
	I1018 13:22:25.294681 1022332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:22:27.271176 1021653 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-779884
	
	I1018 13:22:27.271203 1021653 ubuntu.go:182] provisioning hostname "no-preload-779884"
	I1018 13:22:27.271269 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:27.290890 1021653 main.go:141] libmachine: Using SSH client type: native
	I1018 13:22:27.291217 1021653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34167 <nil> <nil>}
	I1018 13:22:27.291234 1021653 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-779884 && echo "no-preload-779884" | sudo tee /etc/hostname
	I1018 13:22:27.449377 1021653 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-779884
	
	I1018 13:22:27.449548 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:27.469083 1021653 main.go:141] libmachine: Using SSH client type: native
	I1018 13:22:27.469398 1021653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34167 <nil> <nil>}
	I1018 13:22:27.469418 1021653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-779884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-779884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-779884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:22:27.615952 1021653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:22:27.615982 1021653 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:22:27.616027 1021653 ubuntu.go:190] setting up certificates
	I1018 13:22:27.616041 1021653 provision.go:84] configureAuth start
	I1018 13:22:27.616107 1021653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-779884
	I1018 13:22:27.633934 1021653 provision.go:143] copyHostCerts
	I1018 13:22:27.634005 1021653 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:22:27.634018 1021653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:22:27.634101 1021653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:22:27.634207 1021653 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:22:27.634221 1021653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:22:27.634248 1021653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:22:27.634309 1021653 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:22:27.634319 1021653 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:22:27.634343 1021653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:22:27.634402 1021653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.no-preload-779884 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-779884]
	I1018 13:22:28.327152 1021653 provision.go:177] copyRemoteCerts
	I1018 13:22:28.327231 1021653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:22:28.327275 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:28.344804 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:28.447947 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:22:28.466530 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 13:22:28.484976 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:22:28.503944 1021653 provision.go:87] duration metric: took 887.875511ms to configureAuth
	I1018 13:22:28.503973 1021653 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:22:28.504175 1021653 config.go:182] Loaded profile config "no-preload-779884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:22:28.504288 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:28.522161 1021653 main.go:141] libmachine: Using SSH client type: native
	I1018 13:22:28.522473 1021653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34167 <nil> <nil>}
	I1018 13:22:28.522493 1021653 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:22:28.849750 1021653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:22:28.849776 1021653 machine.go:96] duration metric: took 4.777499163s to provisionDockerMachine
	I1018 13:22:28.849786 1021653 client.go:171] duration metric: took 7.561440006s to LocalClient.Create
	I1018 13:22:28.849805 1021653 start.go:167] duration metric: took 7.561537805s to libmachine.API.Create "no-preload-779884"
	I1018 13:22:28.849813 1021653 start.go:293] postStartSetup for "no-preload-779884" (driver="docker")
	I1018 13:22:28.849823 1021653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:22:28.849900 1021653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:22:28.849945 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:28.867001 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:28.971942 1021653 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:22:28.975123 1021653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:22:28.975154 1021653 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:22:28.975165 1021653 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:22:28.975223 1021653 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:22:28.975308 1021653 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:22:28.975414 1021653 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:22:28.983865 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:22:29.003062 1021653 start.go:296] duration metric: took 153.233506ms for postStartSetup
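	The filesync lines above show the general rule applied during postStartSetup: any file placed under the host's .minikube/files/<path> is mirrored to the same <path> inside the node (here 8360862.pem lands in /etc/ssl/certs). A minimal sketch of that path mapping, with the scp transport replaced by a dry-run echo:

	    SRC=/home/jenkins/minikube-integration/21647-834184/.minikube/files
	    find "$SRC" -type f | while read -r f; do
	      # .../files/etc/ssl/certs/8360862.pem -> /etc/ssl/certs/8360862.pem
	      dest="/${f#"$SRC"/}"
	      echo "would scp $f -> $dest"
	    done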
	I1018 13:22:29.003495 1021653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-779884
	I1018 13:22:29.023110 1021653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/config.json ...
	I1018 13:22:29.023396 1021653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:22:29.023443 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:29.041374 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:29.145333 1021653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:22:29.150537 1021653 start.go:128] duration metric: took 7.86615099s to createHost
	I1018 13:22:29.150561 1021653 start.go:83] releasing machines lock for "no-preload-779884", held for 7.866277876s
	I1018 13:22:29.150631 1021653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-779884
	I1018 13:22:29.170416 1021653 ssh_runner.go:195] Run: cat /version.json
	I1018 13:22:29.170489 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:29.170425 1021653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:22:29.170625 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:29.190258 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:29.205509 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:29.408594 1021653 ssh_runner.go:195] Run: systemctl --version
	I1018 13:22:29.415102 1021653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:22:29.449977 1021653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:22:29.454541 1021653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:22:29.454614 1021653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:22:29.485439 1021653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 13:22:29.485510 1021653 start.go:495] detecting cgroup driver to use...
	I1018 13:22:29.485560 1021653 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:22:29.485636 1021653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:22:29.504959 1021653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:22:29.518194 1021653 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:22:29.518301 1021653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:22:29.536270 1021653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:22:29.555776 1021653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:22:29.680140 1021653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:22:29.797166 1021653 docker.go:234] disabling docker service ...
	I1018 13:22:29.797234 1021653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:22:29.820172 1021653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:22:29.833139 1021653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:22:29.957604 1021653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:22:30.128974 1021653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:22:30.143421 1021653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:22:30.161014 1021653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:22:30.161111 1021653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:30.172536 1021653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:22:30.172645 1021653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:30.183828 1021653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:30.193824 1021653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:30.203559 1021653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:22:30.213197 1021653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:30.222525 1021653 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:30.236843 1021653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:30.246676 1021653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:22:30.256195 1021653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:22:30.263953 1021653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:22:30.378374 1021653 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 13:22:30.533046 1021653 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:22:30.533172 1021653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:22:30.539928 1021653 start.go:563] Will wait 60s for crictl version
	I1018 13:22:30.540076 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:30.544936 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:22:30.585126 1021653 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:22:30.585280 1021653 ssh_runner.go:195] Run: crio --version
	I1018 13:22:30.620050 1021653 ssh_runner.go:195] Run: crio --version
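	The block from "configure cri-o to use registry.k8s.io/pause:3.10.1" through the crio restart rewrites /etc/crio/crio.conf.d/02-crio.conf and points crictl at the CRI-O socket. A sketch of the expected end state, reconstructed from the sed/tee commands above rather than read from a real node:

	    # key lines left in /etc/crio/crio.conf.d/02-crio.conf after the edits
	    cat <<'EOF'
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	    EOF
	    # and /etc/crictl.yaml, so crictl talks to CRI-O
	    cat <<'EOF'
	    runtime-endpoint: unix:///var/run/crio/crio.sock
	    EOF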
	I1018 13:22:30.663723 1021653 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:22:30.666861 1021653 cli_runner.go:164] Run: docker network inspect no-preload-779884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:22:30.690181 1021653 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:22:30.694550 1021653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:22:30.707966 1021653 kubeadm.go:883] updating cluster {Name:no-preload-779884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-779884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:22:30.708136 1021653 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:22:30.708183 1021653 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:22:30.743959 1021653 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 13:22:30.743987 1021653 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1018 13:22:30.744113 1021653 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 13:22:30.744826 1021653 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:22:30.744988 1021653 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 13:22:30.745093 1021653 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 13:22:30.745179 1021653 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 13:22:30.745323 1021653 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 13:22:30.745416 1021653 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 13:22:30.745921 1021653 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 13:22:30.746113 1021653 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 13:22:30.746429 1021653 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 13:22:30.746600 1021653 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 13:22:30.746752 1021653 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 13:22:30.746906 1021653 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 13:22:30.747152 1021653 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 13:22:30.747310 1021653 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 13:22:30.747739 1021653 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:22:31.003062 1021653 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1018 13:22:31.028109 1021653 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1018 13:22:31.030654 1021653 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1018 13:22:31.031563 1021653 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1018 13:22:31.048809 1021653 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
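	For this no-preload profile there is no image tarball to extract, so cache_images first probes whether each required image already exists before loading it from the host cache; the podman inspect calls above are that probe running inside the node. A minimal sketch of the same check (run inside the node, e.g. via minikube -p no-preload-779884 ssh):

	    for img in registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0; do
	      if sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	        echo "$img already present in the node image store"
	      else
	        echo "$img missing; minikube would load it from the host cache"
	      fi
	    done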
	I1018 13:22:30.677663 1022332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:22:30.677676 1022332 machine.go:96] duration metric: took 6.505402859s to provisionDockerMachine
	I1018 13:22:30.677686 1022332 start.go:293] postStartSetup for "cert-expiration-076887" (driver="docker")
	I1018 13:22:30.677697 1022332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:22:30.677778 1022332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:22:30.677828 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:30.705086 1022332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34147 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/cert-expiration-076887/id_rsa Username:docker}
	I1018 13:22:30.812298 1022332 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:22:30.816217 1022332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:22:30.816239 1022332 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:22:30.816249 1022332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:22:30.816305 1022332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:22:30.816384 1022332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:22:30.816483 1022332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:22:30.824123 1022332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:22:30.842361 1022332 start.go:296] duration metric: took 164.660911ms for postStartSetup
	I1018 13:22:30.842442 1022332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:22:30.842478 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:30.862165 1022332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34147 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/cert-expiration-076887/id_rsa Username:docker}
	I1018 13:22:30.965824 1022332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:22:30.971624 1022332 fix.go:56] duration metric: took 6.821133505s for fixHost
	I1018 13:22:30.971640 1022332 start.go:83] releasing machines lock for "cert-expiration-076887", held for 6.821173982s
	I1018 13:22:30.971738 1022332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-076887
	I1018 13:22:30.991104 1022332 ssh_runner.go:195] Run: cat /version.json
	I1018 13:22:30.991149 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:30.991424 1022332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:22:30.991470 1022332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-076887
	I1018 13:22:31.020498 1022332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34147 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/cert-expiration-076887/id_rsa Username:docker}
	I1018 13:22:31.032720 1022332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34147 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/cert-expiration-076887/id_rsa Username:docker}
	I1018 13:22:31.256906 1022332 ssh_runner.go:195] Run: systemctl --version
	I1018 13:22:31.266239 1022332 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:22:31.340527 1022332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:22:31.350719 1022332 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:22:31.350807 1022332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:22:31.362603 1022332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:22:31.362619 1022332 start.go:495] detecting cgroup driver to use...
	I1018 13:22:31.362652 1022332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:22:31.362722 1022332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:22:31.384893 1022332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:22:31.403646 1022332 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:22:31.403725 1022332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:22:31.427503 1022332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:22:31.445012 1022332 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:22:31.677131 1022332 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:22:31.920983 1022332 docker.go:234] disabling docker service ...
	I1018 13:22:31.921041 1022332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:22:31.951533 1022332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:22:31.973376 1022332 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:22:32.259053 1022332 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:22:32.557160 1022332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:22:32.577139 1022332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:22:32.608488 1022332 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:22:32.608543 1022332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:32.628808 1022332 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:22:32.628878 1022332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:32.648786 1022332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:32.661067 1022332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:32.676280 1022332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:22:32.690295 1022332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:32.702465 1022332 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:32.713072 1022332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:22:32.728890 1022332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:22:32.739341 1022332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:22:32.751315 1022332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:22:33.062937 1022332 ssh_runner.go:195] Run: sudo systemctl restart crio
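	The block above is the cri-o handoff: cri-docker and docker are stopped and masked, crictl is pointed at the cri-o socket, and the pause image and cgroup manager are patched into /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A minimal Go sketch of the same edit-then-restart sequence run locally with os/exec; the commands mirror the log, but the helper itself is illustrative and not minikube's implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one command and surfaces its combined output on failure.
	func run(args ...string) error {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %v\n%s", args, err, out)
		}
		return nil
	}

	func main() {
		steps := [][]string{
			// Point crictl at the cri-o socket (mirrors the tee into /etc/crictl.yaml).
			{"sh", "-c", `printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml`},
			// Pin the pause image and the cgroup manager in cri-o's drop-in config.
			{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
			{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
			// Reload systemd and restart cri-o so the new config takes effect.
			{"sudo", "systemctl", "daemon-reload"},
			{"sudo", "systemctl", "restart", "crio"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Println(err)
				return
			}
		}
	}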
	I1018 13:22:31.053960 1021653 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1018 13:22:31.109042 1021653 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 13:22:31.116372 1021653 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1018 13:22:31.116410 1021653 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 13:22:31.116462 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:31.248095 1021653 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1018 13:22:31.248130 1021653 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 13:22:31.248178 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:31.248237 1021653 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1018 13:22:31.248250 1021653 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 13:22:31.248270 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:31.248341 1021653 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1018 13:22:31.248355 1021653 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1018 13:22:31.248375 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:31.265117 1021653 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1018 13:22:31.265154 1021653 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 13:22:31.265202 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:31.265264 1021653 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1018 13:22:31.265277 1021653 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1018 13:22:31.265297 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:31.276530 1021653 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1018 13:22:31.276577 1021653 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 13:22:31.276625 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:31.276721 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 13:22:31.276782 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 13:22:31.276827 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 13:22:31.276898 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 13:22:31.280648 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 13:22:31.281936 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 13:22:31.449327 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 13:22:31.449840 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 13:22:31.449990 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 13:22:31.540130 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 13:22:31.540229 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 13:22:31.540292 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 13:22:31.540347 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 13:22:31.540389 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 13:22:31.543442 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 13:22:31.543523 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 13:22:31.733099 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 13:22:31.733177 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 13:22:31.733234 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 13:22:31.733295 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 13:22:31.733350 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 13:22:31.733410 1021653 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 13:22:31.733479 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 13:22:31.733525 1021653 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 13:22:31.733574 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 13:22:31.878335 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1018 13:22:31.878376 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1018 13:22:31.878459 1021653 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 13:22:31.878542 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 13:22:31.878593 1021653 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 13:22:31.878639 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 13:22:31.878692 1021653 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 13:22:31.878737 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1018 13:22:31.878785 1021653 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 13:22:31.878831 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1018 13:22:31.878877 1021653 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 13:22:31.878928 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1018 13:22:31.878979 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1018 13:22:31.878997 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1018 13:22:31.937067 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1018 13:22:31.937107 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1018 13:22:31.937154 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1018 13:22:31.937173 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1018 13:22:31.937221 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1018 13:22:31.937238 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1018 13:22:31.937276 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1018 13:22:31.937291 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1018 13:22:31.937333 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1018 13:22:31.937352 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	W1018 13:22:31.972163 1021653 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1018 13:22:31.972264 1021653 retry.go:31] will retry after 333.375049ms: ssh: rejected: connect failed (open failed)
	W1018 13:22:31.972372 1021653 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1018 13:22:31.972403 1021653 retry.go:31] will retry after 147.219788ms: ssh: rejected: connect failed (open failed)
	I1018 13:22:32.114884 1021653 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1018 13:22:32.115009 1021653 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1018 13:22:32.115118 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:32.119762 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	W1018 13:22:32.127923 1021653 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1018 13:22:32.128151 1021653 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:22:32.128194 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:32.213117 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:32.240978 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:32.255886 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:32.305853 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:22:32.347069 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:22:32.954927 1021653 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1018 13:22:32.955008 1021653 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:22:32.955093 1021653 ssh_runner.go:195] Run: which crictl
	I1018 13:22:32.955204 1021653 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1018 13:22:33.104744 1021653 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 13:22:33.104861 1021653 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 13:22:33.151729 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:22:34.970128 1021653 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.865216064s)
	I1018 13:22:34.970155 1021653 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1018 13:22:34.970174 1021653 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 13:22:34.970224 1021653 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 13:22:34.970297 1021653 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.818492737s)
	I1018 13:22:34.970334 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:22:36.112869 1021653 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.142511885s)
	I1018 13:22:36.112979 1021653 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.14273212s)
	I1018 13:22:36.113009 1021653 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1018 13:22:36.113022 1021653 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:22:36.113029 1021653 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 13:22:36.113156 1021653 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 13:22:37.460107 1021653 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.346906554s)
	I1018 13:22:37.460141 1021653 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1018 13:22:37.460160 1021653 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 13:22:37.460210 1021653 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 13:22:37.460287 1021653 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.347227008s)
	I1018 13:22:37.460315 1021653 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1018 13:22:37.460385 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1018 13:22:38.766169 1021653 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.305757071s)
	I1018 13:22:38.766204 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1018 13:22:38.766231 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1018 13:22:38.766381 1021653 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.306154555s)
	I1018 13:22:38.766396 1021653 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1018 13:22:38.766414 1021653 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1018 13:22:38.766460 1021653 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1018 13:22:40.445795 1021653 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.679308832s)
	I1018 13:22:40.445831 1021653 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1018 13:22:40.445863 1021653 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1018 13:22:40.445939 1021653 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1018 13:22:44.288093 1021653 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.842131511s)
	I1018 13:22:44.288119 1021653 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1018 13:22:44.288149 1021653 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1018 13:22:44.288201 1021653 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1018 13:22:44.856173 1021653 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21647-834184/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1018 13:22:44.856223 1021653 cache_images.go:124] Successfully loaded all cached images
	I1018 13:22:44.856230 1021653 cache_images.go:93] duration metric: took 14.112223709s to LoadCachedImages
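	The cache-load pass above follows one pattern per image: ask podman whether the image already exists in the runtime, remove a mismatched copy with crictl if needed, scp the cached tarball into /var/lib/minikube/images, and podman-load it. A sketch of the core check-then-load step, assuming the tarballs are already on the node; the image references and paths match the log, everything else is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imagePresent mirrors: sudo podman image inspect --format {{.Id}} <ref>
	func imagePresent(ref string) bool {
		return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
	}

	// loadFromCache mirrors: sudo podman load -i /var/lib/minikube/images/<name>
	func loadFromCache(ref, tarball string) error {
		if imagePresent(ref) {
			return nil // runtime already has it, nothing to transfer
		}
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}

	func main() {
		images := map[string]string{
			"registry.k8s.io/pause:3.10.1":           "/var/lib/minikube/images/pause_3.10.1",
			"registry.k8s.io/kube-apiserver:v1.34.1": "/var/lib/minikube/images/kube-apiserver_v1.34.1",
		}
		for ref, tar := range images {
			if err := loadFromCache(ref, tar); err != nil {
				fmt.Println(err)
			}
		}
	}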
	I1018 13:22:44.856242 1021653 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 13:22:44.856330 1021653 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-779884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-779884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:22:44.856409 1021653 ssh_runner.go:195] Run: crio config
	I1018 13:22:44.918251 1021653 cni.go:84] Creating CNI manager for ""
	I1018 13:22:44.918278 1021653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:22:44.918299 1021653 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:22:44.918322 1021653 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-779884 NodeName:no-preload-779884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:22:44.918460 1021653 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-779884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:22:44.918541 1021653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:22:44.927933 1021653 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1018 13:22:44.928015 1021653 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1018 13:22:44.936314 1021653 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1018 13:22:44.936760 1021653 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1018 13:22:44.936866 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1018 13:22:44.937052 1021653 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1018 13:22:44.941338 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1018 13:22:44.941375 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1018 13:22:45.953617 1021653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:22:45.992448 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1018 13:22:46.000431 1021653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1018 13:22:46.007073 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1018 13:22:46.007118 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1018 13:22:46.023967 1021653 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1018 13:22:46.024018 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1018 13:22:46.666836 1021653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:22:46.676136 1021653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 13:22:46.689951 1021653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:22:46.704198 1021653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 13:22:46.718263 1021653 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:22:46.722636 1021653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:22:46.732739 1021653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:22:46.855414 1021653 ssh_runner.go:195] Run: sudo systemctl start kubelet
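	The /etc/hosts step above is a replace-or-append: any stale control-plane.minikube.internal line is filtered out and the current entry for 192.168.85.2 is appended before kubelet is started. A sketch of that logic in Go, writing to an illustrative /tmp/hosts.sample instead of the real /etc/hosts.

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.85.2\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Keep every line that is not already a control-plane.minikube.internal entry.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		// Write to a sample path so the sketch never touches the live hosts file.
		if err := os.WriteFile("/tmp/hosts.sample", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}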
	I1018 13:22:46.871527 1021653 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884 for IP: 192.168.85.2
	I1018 13:22:46.871546 1021653 certs.go:195] generating shared ca certs ...
	I1018 13:22:46.871563 1021653 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:22:46.871766 1021653 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:22:46.871817 1021653 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:22:46.871831 1021653 certs.go:257] generating profile certs ...
	I1018 13:22:46.871889 1021653 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.key
	I1018 13:22:46.871906 1021653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt with IP's: []
	I1018 13:22:47.005708 1021653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt ...
	I1018 13:22:47.005743 1021653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: {Name:mk14e4dbe2b37a0ca48cff0877949242f7b4c68b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:22:47.005962 1021653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.key ...
	I1018 13:22:47.005976 1021653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.key: {Name:mk63edca21c72e0166f6715ad4a61738c3887a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:22:47.006074 1021653 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.key.ba61fae0
	I1018 13:22:47.006093 1021653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.crt.ba61fae0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 13:22:47.424035 1021653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.crt.ba61fae0 ...
	I1018 13:22:47.424064 1021653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.crt.ba61fae0: {Name:mkd9932e5ad37d0df0c6b2d1da491353a32d5274 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:22:47.424245 1021653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.key.ba61fae0 ...
	I1018 13:22:47.424259 1021653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.key.ba61fae0: {Name:mk35bea42c26fa47360d320ef7de827e81df857d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:22:47.424340 1021653 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.crt.ba61fae0 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.crt
	I1018 13:22:47.424418 1021653 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.key.ba61fae0 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.key
	I1018 13:22:47.424479 1021653 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/proxy-client.key
	I1018 13:22:47.424491 1021653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/proxy-client.crt with IP's: []
	I1018 13:22:48.971827 1021653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/proxy-client.crt ...
	I1018 13:22:48.971864 1021653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/proxy-client.crt: {Name:mk68bc6a59a79889ae3cc64162764af79bbafebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:22:48.972076 1021653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/proxy-client.key ...
	I1018 13:22:48.972091 1021653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/proxy-client.key: {Name:mk0d295427c8742659deb7730c21635e5e7a608b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
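	The profile certificates generated above (client, apiserver, aggregator proxy-client) are ordinary x509 material signed by the existing minikube CA, with the API-server cert carrying the listed IP SANs. A self-contained Go sketch of that signing step, using a throwaway CA in place of the cached minikube CA; only the SAN IPs come from the log.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for the cached .minikube/ca.{crt,key}.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// API-server serving cert with the SAN IPs shown in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}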
	I1018 13:22:48.972279 1021653 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:22:48.972325 1021653 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:22:48.972339 1021653 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:22:48.972368 1021653 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:22:48.972398 1021653 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:22:48.972424 1021653 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:22:48.972471 1021653 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:22:48.973121 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:22:48.993308 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:22:49.016689 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:22:49.035632 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:22:49.054250 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 13:22:49.073418 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:22:49.091616 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:22:49.110041 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 13:22:49.128279 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:22:49.147064 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:22:49.172991 1021653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:22:49.193582 1021653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:22:49.209118 1021653 ssh_runner.go:195] Run: openssl version
	I1018 13:22:49.218515 1021653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:22:49.232093 1021653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:22:49.236186 1021653 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:22:49.236253 1021653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:22:49.282650 1021653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:22:49.292140 1021653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:22:49.300967 1021653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:22:49.305111 1021653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:22:49.305177 1021653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:22:49.346732 1021653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:22:49.355489 1021653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:22:49.364317 1021653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:22:49.368894 1021653 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:22:49.368962 1021653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:22:49.418919 1021653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
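	The certificate-install loop above drops each PEM into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (for example 51391683.0), which is how OpenSSL discovers trust anchors. A sketch that shells out to the same openssl and ln commands the log shows; the sample path in main is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCACert mirrors: openssl x509 -hash -noout -in <pem>
	// followed by: sudo ln -fs <pem> /etc/ssl/certs/<hash>.0
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}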
	I1018 13:22:49.428577 1021653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:22:49.432642 1021653 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 13:22:49.432698 1021653 kubeadm.go:400] StartCluster: {Name:no-preload-779884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-779884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:22:49.432794 1021653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:22:49.432855 1021653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:22:49.463066 1021653 cri.go:89] found id: ""
	I1018 13:22:49.463147 1021653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:22:49.471364 1021653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 13:22:49.479474 1021653 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 13:22:49.479543 1021653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 13:22:49.487900 1021653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 13:22:49.487924 1021653 kubeadm.go:157] found existing configuration files:
	
	I1018 13:22:49.487974 1021653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 13:22:49.496463 1021653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 13:22:49.496577 1021653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 13:22:49.504130 1021653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 13:22:49.512147 1021653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 13:22:49.512244 1021653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 13:22:49.520365 1021653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 13:22:49.528464 1021653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 13:22:49.528534 1021653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 13:22:49.536401 1021653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 13:22:49.544926 1021653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 13:22:49.544995 1021653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 13:22:49.552743 1021653 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 13:22:49.624257 1021653 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 13:22:49.624592 1021653 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 13:22:49.694332 1021653 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 13:23:08.076546 1021653 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 13:23:08.076606 1021653 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 13:23:08.076705 1021653 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 13:23:08.076768 1021653 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 13:23:08.076808 1021653 kubeadm.go:318] OS: Linux
	I1018 13:23:08.076859 1021653 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 13:23:08.076913 1021653 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 13:23:08.076966 1021653 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 13:23:08.077019 1021653 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 13:23:08.077074 1021653 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 13:23:08.077146 1021653 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 13:23:08.077198 1021653 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 13:23:08.077252 1021653 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 13:23:08.077305 1021653 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 13:23:08.077385 1021653 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 13:23:08.077487 1021653 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 13:23:08.077583 1021653 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 13:23:08.077652 1021653 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 13:23:08.080976 1021653 out.go:252]   - Generating certificates and keys ...
	I1018 13:23:08.081103 1021653 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 13:23:08.081180 1021653 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 13:23:08.081256 1021653 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 13:23:08.081321 1021653 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 13:23:08.081391 1021653 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 13:23:08.081449 1021653 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 13:23:08.081510 1021653 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 13:23:08.081646 1021653 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-779884] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 13:23:08.081707 1021653 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 13:23:08.081836 1021653 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-779884] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 13:23:08.081908 1021653 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 13:23:08.081978 1021653 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 13:23:08.082028 1021653 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 13:23:08.082090 1021653 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 13:23:08.082147 1021653 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 13:23:08.082210 1021653 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 13:23:08.082273 1021653 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 13:23:08.082347 1021653 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 13:23:08.082410 1021653 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 13:23:08.082499 1021653 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 13:23:08.082572 1021653 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 13:23:08.085428 1021653 out.go:252]   - Booting up control plane ...
	I1018 13:23:08.085553 1021653 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 13:23:08.085652 1021653 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 13:23:08.085728 1021653 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 13:23:08.085852 1021653 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 13:23:08.085964 1021653 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 13:23:08.086113 1021653 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 13:23:08.086235 1021653 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 13:23:08.086291 1021653 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 13:23:08.086446 1021653 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 13:23:08.086560 1021653 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 13:23:08.086625 1021653 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.502750065s
	I1018 13:23:08.086730 1021653 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 13:23:08.086819 1021653 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 13:23:08.086919 1021653 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 13:23:08.087026 1021653 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 13:23:08.087145 1021653 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.961674641s
	I1018 13:23:08.087222 1021653 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.79555187s
	I1018 13:23:08.087310 1021653 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502402018s
	I1018 13:23:08.087440 1021653 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 13:23:08.087613 1021653 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 13:23:08.087759 1021653 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 13:23:08.088015 1021653 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-779884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 13:23:08.088082 1021653 kubeadm.go:318] [bootstrap-token] Using token: 23zmbo.xfn57qdj44jyl3mt
	I1018 13:23:08.093111 1021653 out.go:252]   - Configuring RBAC rules ...
	I1018 13:23:08.093246 1021653 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 13:23:08.093343 1021653 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 13:23:08.093492 1021653 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 13:23:08.093633 1021653 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 13:23:08.093760 1021653 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 13:23:08.093854 1021653 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 13:23:08.093977 1021653 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 13:23:08.094027 1021653 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 13:23:08.094099 1021653 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 13:23:08.094107 1021653 kubeadm.go:318] 
	I1018 13:23:08.094171 1021653 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 13:23:08.094178 1021653 kubeadm.go:318] 
	I1018 13:23:08.094259 1021653 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 13:23:08.094266 1021653 kubeadm.go:318] 
	I1018 13:23:08.094292 1021653 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 13:23:08.094356 1021653 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 13:23:08.094412 1021653 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 13:23:08.094422 1021653 kubeadm.go:318] 
	I1018 13:23:08.094478 1021653 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 13:23:08.094485 1021653 kubeadm.go:318] 
	I1018 13:23:08.094535 1021653 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 13:23:08.094543 1021653 kubeadm.go:318] 
	I1018 13:23:08.094597 1021653 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 13:23:08.094680 1021653 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 13:23:08.094755 1021653 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 13:23:08.094764 1021653 kubeadm.go:318] 
	I1018 13:23:08.094852 1021653 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 13:23:08.094938 1021653 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 13:23:08.094945 1021653 kubeadm.go:318] 
	I1018 13:23:08.095043 1021653 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 23zmbo.xfn57qdj44jyl3mt \
	I1018 13:23:08.095155 1021653 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e \
	I1018 13:23:08.095179 1021653 kubeadm.go:318] 	--control-plane 
	I1018 13:23:08.095187 1021653 kubeadm.go:318] 
	I1018 13:23:08.095277 1021653 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 13:23:08.095286 1021653 kubeadm.go:318] 
	I1018 13:23:08.095372 1021653 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 23zmbo.xfn57qdj44jyl3mt \
	I1018 13:23:08.095496 1021653 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e 
	I1018 13:23:08.095510 1021653 cni.go:84] Creating CNI manager for ""
	I1018 13:23:08.095518 1021653 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:23:08.100579 1021653 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 13:23:08.103564 1021653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 13:23:08.108011 1021653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 13:23:08.108036 1021653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 13:23:08.123228 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 13:23:08.422872 1021653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 13:23:08.423012 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:08.423098 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-779884 minikube.k8s.io/updated_at=2025_10_18T13_23_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=no-preload-779884 minikube.k8s.io/primary=true
	I1018 13:23:08.566103 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:08.566186 1021653 ops.go:34] apiserver oom_adj: -16
	I1018 13:23:09.066761 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:09.566242 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:10.066214 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:10.566455 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:11.066681 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:11.567074 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:12.066973 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:12.566283 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:13.066533 1021653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:23:13.187432 1021653 kubeadm.go:1113] duration metric: took 4.764470529s to wait for elevateKubeSystemPrivileges
	I1018 13:23:13.187467 1021653 kubeadm.go:402] duration metric: took 23.754773557s to StartCluster
	I1018 13:23:13.187485 1021653 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:23:13.187550 1021653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:23:13.188648 1021653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:23:13.188878 1021653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 13:23:13.188909 1021653 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:23:13.189134 1021653 config.go:182] Loaded profile config "no-preload-779884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:23:13.189174 1021653 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:23:13.189236 1021653 addons.go:69] Setting storage-provisioner=true in profile "no-preload-779884"
	I1018 13:23:13.189249 1021653 addons.go:238] Setting addon storage-provisioner=true in "no-preload-779884"
	I1018 13:23:13.189271 1021653 host.go:66] Checking if "no-preload-779884" exists ...
	I1018 13:23:13.189707 1021653 cli_runner.go:164] Run: docker container inspect no-preload-779884 --format={{.State.Status}}
	I1018 13:23:13.190223 1021653 addons.go:69] Setting default-storageclass=true in profile "no-preload-779884"
	I1018 13:23:13.190251 1021653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-779884"
	I1018 13:23:13.190518 1021653 cli_runner.go:164] Run: docker container inspect no-preload-779884 --format={{.State.Status}}
	I1018 13:23:13.192100 1021653 out.go:179] * Verifying Kubernetes components...
	I1018 13:23:13.195949 1021653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:23:13.229159 1021653 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:23:13.232375 1021653 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:23:13.232398 1021653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:23:13.232464 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:23:13.237883 1021653 addons.go:238] Setting addon default-storageclass=true in "no-preload-779884"
	I1018 13:23:13.237941 1021653 host.go:66] Checking if "no-preload-779884" exists ...
	I1018 13:23:13.238448 1021653 cli_runner.go:164] Run: docker container inspect no-preload-779884 --format={{.State.Status}}
	I1018 13:23:13.261412 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:23:13.302889 1021653 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:23:13.302913 1021653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:23:13.302979 1021653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:23:13.333832 1021653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34167 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:23:13.446922 1021653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 13:23:13.540167 1021653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:23:13.648858 1021653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:23:13.669098 1021653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:23:13.955492 1021653 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 13:23:13.957649 1021653 node_ready.go:35] waiting up to 6m0s for node "no-preload-779884" to be "Ready" ...
	I1018 13:23:14.430701 1021653 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 13:23:14.433676 1021653 addons.go:514] duration metric: took 1.244475354s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 13:23:14.459694 1021653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-779884" context rescaled to 1 replicas
	W1018 13:23:15.960425 1021653 node_ready.go:57] node "no-preload-779884" has "Ready":"False" status (will retry)
	W1018 13:23:17.961172 1021653 node_ready.go:57] node "no-preload-779884" has "Ready":"False" status (will retry)
	W1018 13:23:20.460410 1021653 node_ready.go:57] node "no-preload-779884" has "Ready":"False" status (will retry)
	W1018 13:23:22.460528 1021653 node_ready.go:57] node "no-preload-779884" has "Ready":"False" status (will retry)
	W1018 13:23:24.460904 1021653 node_ready.go:57] node "no-preload-779884" has "Ready":"False" status (will retry)
	W1018 13:23:26.461126 1021653 node_ready.go:57] node "no-preload-779884" has "Ready":"False" status (will retry)
	I1018 13:23:27.466895 1021653 node_ready.go:49] node "no-preload-779884" is "Ready"
	I1018 13:23:27.466921 1021653 node_ready.go:38] duration metric: took 13.50924769s for node "no-preload-779884" to be "Ready" ...
	I1018 13:23:27.466934 1021653 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:23:27.466993 1021653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:23:27.501976 1021653 api_server.go:72] duration metric: took 14.313034501s to wait for apiserver process to appear ...
	I1018 13:23:27.501999 1021653 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:23:27.502020 1021653 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 13:23:27.524630 1021653 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 13:23:27.526469 1021653 api_server.go:141] control plane version: v1.34.1
	I1018 13:23:27.526501 1021653 api_server.go:131] duration metric: took 24.495005ms to wait for apiserver health ...
	I1018 13:23:27.526514 1021653 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:23:27.534297 1021653 system_pods.go:59] 8 kube-system pods found
	I1018 13:23:27.534407 1021653 system_pods.go:61] "coredns-66bc5c9577-fdgz7" [672e7011-6bf2-4d3f-96af-c75c979a5e5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:23:27.534442 1021653 system_pods.go:61] "etcd-no-preload-779884" [6abac989-2226-4fea-b6ee-3a40bdc71896] Running
	I1018 13:23:27.534485 1021653 system_pods.go:61] "kindnet-gc7k5" [22462756-3f13-454b-a9ea-e5658196e142] Running
	I1018 13:23:27.534520 1021653 system_pods.go:61] "kube-apiserver-no-preload-779884" [37258c00-8797-4109-86b6-8a45adcbc911] Running
	I1018 13:23:27.534556 1021653 system_pods.go:61] "kube-controller-manager-no-preload-779884" [f7f165f7-8fcd-4044-b247-6759796498dd] Running
	I1018 13:23:27.534593 1021653 system_pods.go:61] "kube-proxy-z6q26" [b74adbbe-e461-430c-a702-a957e5c4a4d1] Running
	I1018 13:23:27.534617 1021653 system_pods.go:61] "kube-scheduler-no-preload-779884" [4d83bc9c-2134-43c8-b319-4695005d435d] Running
	I1018 13:23:27.534644 1021653 system_pods.go:61] "storage-provisioner" [7d5b87af-be40-4b31-9c61-aa12d7e17f65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:23:27.534670 1021653 system_pods.go:74] duration metric: took 8.149376ms to wait for pod list to return data ...
	I1018 13:23:27.534708 1021653 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:23:27.538976 1021653 default_sa.go:45] found service account: "default"
	I1018 13:23:27.539057 1021653 default_sa.go:55] duration metric: took 4.315519ms for default service account to be created ...
	I1018 13:23:27.539103 1021653 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:23:27.542908 1021653 system_pods.go:86] 8 kube-system pods found
	I1018 13:23:27.542997 1021653 system_pods.go:89] "coredns-66bc5c9577-fdgz7" [672e7011-6bf2-4d3f-96af-c75c979a5e5b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:23:27.543019 1021653 system_pods.go:89] "etcd-no-preload-779884" [6abac989-2226-4fea-b6ee-3a40bdc71896] Running
	I1018 13:23:27.543065 1021653 system_pods.go:89] "kindnet-gc7k5" [22462756-3f13-454b-a9ea-e5658196e142] Running
	I1018 13:23:27.543092 1021653 system_pods.go:89] "kube-apiserver-no-preload-779884" [37258c00-8797-4109-86b6-8a45adcbc911] Running
	I1018 13:23:27.543115 1021653 system_pods.go:89] "kube-controller-manager-no-preload-779884" [f7f165f7-8fcd-4044-b247-6759796498dd] Running
	I1018 13:23:27.543149 1021653 system_pods.go:89] "kube-proxy-z6q26" [b74adbbe-e461-430c-a702-a957e5c4a4d1] Running
	I1018 13:23:27.543174 1021653 system_pods.go:89] "kube-scheduler-no-preload-779884" [4d83bc9c-2134-43c8-b319-4695005d435d] Running
	I1018 13:23:27.543197 1021653 system_pods.go:89] "storage-provisioner" [7d5b87af-be40-4b31-9c61-aa12d7e17f65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:23:27.543271 1021653 retry.go:31] will retry after 259.317701ms: missing components: kube-dns
	I1018 13:23:27.807039 1021653 system_pods.go:86] 8 kube-system pods found
	I1018 13:23:27.807075 1021653 system_pods.go:89] "coredns-66bc5c9577-fdgz7" [672e7011-6bf2-4d3f-96af-c75c979a5e5b] Running
	I1018 13:23:27.807082 1021653 system_pods.go:89] "etcd-no-preload-779884" [6abac989-2226-4fea-b6ee-3a40bdc71896] Running
	I1018 13:23:27.807087 1021653 system_pods.go:89] "kindnet-gc7k5" [22462756-3f13-454b-a9ea-e5658196e142] Running
	I1018 13:23:27.807091 1021653 system_pods.go:89] "kube-apiserver-no-preload-779884" [37258c00-8797-4109-86b6-8a45adcbc911] Running
	I1018 13:23:27.807096 1021653 system_pods.go:89] "kube-controller-manager-no-preload-779884" [f7f165f7-8fcd-4044-b247-6759796498dd] Running
	I1018 13:23:27.807108 1021653 system_pods.go:89] "kube-proxy-z6q26" [b74adbbe-e461-430c-a702-a957e5c4a4d1] Running
	I1018 13:23:27.807119 1021653 system_pods.go:89] "kube-scheduler-no-preload-779884" [4d83bc9c-2134-43c8-b319-4695005d435d] Running
	I1018 13:23:27.807123 1021653 system_pods.go:89] "storage-provisioner" [7d5b87af-be40-4b31-9c61-aa12d7e17f65] Running
	I1018 13:23:27.807132 1021653 system_pods.go:126] duration metric: took 267.99056ms to wait for k8s-apps to be running ...
	I1018 13:23:27.807145 1021653 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:23:27.807202 1021653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:23:27.822286 1021653 system_svc.go:56] duration metric: took 15.132349ms WaitForService to wait for kubelet
	I1018 13:23:27.822316 1021653 kubeadm.go:586] duration metric: took 14.633383035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:23:27.822337 1021653 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:23:27.825461 1021653 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:23:27.825499 1021653 node_conditions.go:123] node cpu capacity is 2
	I1018 13:23:27.825514 1021653 node_conditions.go:105] duration metric: took 3.170819ms to run NodePressure ...
	I1018 13:23:27.825527 1021653 start.go:241] waiting for startup goroutines ...
	I1018 13:23:27.825536 1021653 start.go:246] waiting for cluster config update ...
	I1018 13:23:27.825546 1021653 start.go:255] writing updated cluster config ...
	I1018 13:23:27.825853 1021653 ssh_runner.go:195] Run: rm -f paused
	I1018 13:23:27.830082 1021653 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:23:27.833875 1021653 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fdgz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:27.839316 1021653 pod_ready.go:94] pod "coredns-66bc5c9577-fdgz7" is "Ready"
	I1018 13:23:27.839347 1021653 pod_ready.go:86] duration metric: took 5.445138ms for pod "coredns-66bc5c9577-fdgz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:27.842151 1021653 pod_ready.go:83] waiting for pod "etcd-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:27.846972 1021653 pod_ready.go:94] pod "etcd-no-preload-779884" is "Ready"
	I1018 13:23:27.847000 1021653 pod_ready.go:86] duration metric: took 4.821485ms for pod "etcd-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:27.849464 1021653 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:27.854037 1021653 pod_ready.go:94] pod "kube-apiserver-no-preload-779884" is "Ready"
	I1018 13:23:27.854067 1021653 pod_ready.go:86] duration metric: took 4.572736ms for pod "kube-apiserver-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:27.856367 1021653 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:28.238301 1021653 pod_ready.go:94] pod "kube-controller-manager-no-preload-779884" is "Ready"
	I1018 13:23:28.238341 1021653 pod_ready.go:86] duration metric: took 381.912435ms for pod "kube-controller-manager-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:28.435047 1021653 pod_ready.go:83] waiting for pod "kube-proxy-z6q26" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:28.833899 1021653 pod_ready.go:94] pod "kube-proxy-z6q26" is "Ready"
	I1018 13:23:28.833926 1021653 pod_ready.go:86] duration metric: took 398.853294ms for pod "kube-proxy-z6q26" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:29.034135 1021653 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:29.435597 1021653 pod_ready.go:94] pod "kube-scheduler-no-preload-779884" is "Ready"
	I1018 13:23:29.435624 1021653 pod_ready.go:86] duration metric: took 401.463935ms for pod "kube-scheduler-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:23:29.435637 1021653 pod_ready.go:40] duration metric: took 1.60552283s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:23:29.492846 1021653 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:23:29.496197 1021653 out.go:179] * Done! kubectl is now configured to use "no-preload-779884" cluster and "default" namespace by default
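
The node_ready / pod_ready retry loop logged above is, at bottom, a poll against the API server for the node's Ready condition. A minimal client-go sketch of that pattern, for reference only (the kubeconfig path and node name are copied from this run purely as placeholders; this is not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point this at wherever the cluster credentials live.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, give up after 6m, mirroring the "waiting up to 6m0s" line above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "no-preload-779884", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}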
	
	
	==> CRI-O <==
	Oct 18 13:23:27 no-preload-779884 crio[839]: time="2025-10-18T13:23:27.440481191Z" level=info msg="Created container 619f6f8721b11d8589cdfa27a110407ac6f6e11f3658119f5533635622173c83: kube-system/coredns-66bc5c9577-fdgz7/coredns" id=29466b0c-2b53-44cd-b7db-9655c416ca90 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:23:27 no-preload-779884 crio[839]: time="2025-10-18T13:23:27.44159307Z" level=info msg="Starting container: 619f6f8721b11d8589cdfa27a110407ac6f6e11f3658119f5533635622173c83" id=ccb6fc76-200d-451b-8c7f-f7912929c1a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:23:27 no-preload-779884 crio[839]: time="2025-10-18T13:23:27.444992495Z" level=info msg="Started container" PID=2531 containerID=619f6f8721b11d8589cdfa27a110407ac6f6e11f3658119f5533635622173c83 description=kube-system/coredns-66bc5c9577-fdgz7/coredns id=ccb6fc76-200d-451b-8c7f-f7912929c1a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c10f902f4378b2af00fb74a3e3c9fd9eff99e341c322afadc00364a556d45a3a
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.082975971Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b5f8bb89-2017-42d2-ade2-933d49bbdc6b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.083085863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.090851983Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3d70a5e9a8cf765b8bf7a6bd541cd20ca0ff703f65967844d1a392364cfe0507 UID:4fe34383-2a51-4ea1-b880-6976f0c5dfbf NetNS:/var/run/netns/4d19b6e3-a80c-459d-b9be-761bcd4f49a2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40026fc6d0}] Aliases:map[]}"
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.090896077Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.11455676Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3d70a5e9a8cf765b8bf7a6bd541cd20ca0ff703f65967844d1a392364cfe0507 UID:4fe34383-2a51-4ea1-b880-6976f0c5dfbf NetNS:/var/run/netns/4d19b6e3-a80c-459d-b9be-761bcd4f49a2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40026fc6d0}] Aliases:map[]}"
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.114735699Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.119556158Z" level=info msg="Ran pod sandbox 3d70a5e9a8cf765b8bf7a6bd541cd20ca0ff703f65967844d1a392364cfe0507 with infra container: default/busybox/POD" id=b5f8bb89-2017-42d2-ade2-933d49bbdc6b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.121004377Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=612a9e74-9c29-4626-a889-982f5c7f6e08 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.121156403Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=612a9e74-9c29-4626-a889-982f5c7f6e08 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.121196731Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=612a9e74-9c29-4626-a889-982f5c7f6e08 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.123493196Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=986954da-9641-49f0-86d8-83a239f31e17 name=/runtime.v1.ImageService/PullImage
	Oct 18 13:23:30 no-preload-779884 crio[839]: time="2025-10-18T13:23:30.127875825Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.210463472Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=986954da-9641-49f0-86d8-83a239f31e17 name=/runtime.v1.ImageService/PullImage
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.211640156Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=94f59318-4944-4443-bc13-3b01d966b6c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.213598418Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56c6f03e-b8ff-4273-a7da-6144573e59fc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.219291378Z" level=info msg="Creating container: default/busybox/busybox" id=2760c33e-c224-42bb-b021-2d303bb3ef70 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.220149847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.226274876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.227250967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.249400227Z" level=info msg="Created container 058c57f5a04469be5effe639110c00b242840ddd25f2dd83a29f3d525715f2f5: default/busybox/busybox" id=2760c33e-c224-42bb-b021-2d303bb3ef70 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.250526407Z" level=info msg="Starting container: 058c57f5a04469be5effe639110c00b242840ddd25f2dd83a29f3d525715f2f5" id=09147b76-12df-4908-afd6-885e4c05d3bc name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:23:32 no-preload-779884 crio[839]: time="2025-10-18T13:23:32.252706973Z" level=info msg="Started container" PID=2591 containerID=058c57f5a04469be5effe639110c00b242840ddd25f2dd83a29f3d525715f2f5 description=default/busybox/busybox id=09147b76-12df-4908-afd6-885e4c05d3bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d70a5e9a8cf765b8bf7a6bd541cd20ca0ff703f65967844d1a392364cfe0507
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	058c57f5a0446       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago       Running             busybox                   0                   3d70a5e9a8cf7       busybox                                     default
	619f6f8721b11       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      11 seconds ago      Running             coredns                   0                   c10f902f4378b       coredns-66bc5c9577-fdgz7                    kube-system
	ee6b496602c9d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      11 seconds ago      Running             storage-provisioner       0                   a2710f888430d       storage-provisioner                         kube-system
	aba88773aea52       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    22 seconds ago      Running             kindnet-cni               0                   f5a0792739b34       kindnet-gc7k5                               kube-system
	a810b6bd06a45       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      25 seconds ago      Running             kube-proxy                0                   19cb5c9008d53       kube-proxy-z6q26                            kube-system
	697b1085174ac       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      38 seconds ago      Running             etcd                      0                   719863e6fa810       etcd-no-preload-779884                      kube-system
	0409b73ae4ae1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      38 seconds ago      Running             kube-controller-manager   0                   8f067c98f6c37       kube-controller-manager-no-preload-779884   kube-system
	77c9e199c8924       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      38 seconds ago      Running             kube-scheduler            0                   39e7faf9b6a75       kube-scheduler-no-preload-779884            kube-system
	8c361e752b999       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      38 seconds ago      Running             kube-apiserver            0                   f05e695c55cef       kube-apiserver-no-preload-779884            kube-system
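
The CreateContainer/StartContainer messages in the CRI-O log and the container table above are two views of the same CRI gRPC API. A rough Go sketch of querying that API directly, assuming CRI-O's default socket path /var/run/crio/crio.sock (an assumption, not something this report states):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O runtime endpoint; other runtimes expose a different socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The same records that "crictl ps" and the table above render: one entry per container.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}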
	
	
	==> coredns [619f6f8721b11d8589cdfa27a110407ac6f6e11f3658119f5533635622173c83] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48557 - 4921 "HINFO IN 6776228551274473193.7923931052321940969. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015037826s
	
	
	==> describe nodes <==
	Name:               no-preload-779884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-779884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=no-preload-779884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_23_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:23:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-779884
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:23:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:23:38 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:23:38 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:23:38 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:23:38 +0000   Sat, 18 Oct 2025 13:23:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-779884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                42ba3d0a-7b48-4d7d-a694-f3722a91765b
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-fdgz7                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-no-preload-779884                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-gc7k5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-779884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-779884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-z6q26                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-779884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node no-preload-779884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node no-preload-779884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node no-preload-779884 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node no-preload-779884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node no-preload-779884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node no-preload-779884 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node no-preload-779884 event: Registered Node no-preload-779884 in Controller
	  Normal   NodeReady                12s                kubelet          Node no-preload-779884 status is now: NodeReady
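
As a cross-check on the Allocated resources block above: kubectl describe reports requests and limits as a fraction of the node's allocatable capacity, truncated to a whole percent. 850m of CPU requests against 2 CPUs is 850/2000 = 42.5%, shown as 42%; 220Mi of memory against 8022296Ki is 225280/8022296 ≈ 2.8%, shown as 2%.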
	
	
	==> dmesg <==
	[Oct18 12:57] overlayfs: idmapped layers are currently not supported
	[Oct18 12:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:59] overlayfs: idmapped layers are currently not supported
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [697b1085174ac7dd3c3c4ceb171c23501af69b9419049a6de3c171b1629800be] <==
	{"level":"warn","ts":"2025-10-18T13:23:03.456489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.480791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.536403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.539636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.584595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.626453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.656023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.673396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.706800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.724447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.769828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.790332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.817941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.865682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.891868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.918195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.951084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:03.976883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:04.019590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:04.037817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:04.063395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:04.108582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:04.136238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:04.139856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:23:04.238004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59110","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:23:39 up  5:06,  0 user,  load average: 1.99, 2.70, 2.33
	Linux no-preload-779884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aba88773aea5279602557116655851c76e8c2e5e3b9ae1d97e74c01981e4360e] <==
	I1018 13:23:16.413226       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:23:16.413630       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:23:16.413769       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:23:16.413780       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:23:16.413794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:23:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:23:16.641407       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:23:16.707713       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:23:16.707816       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:23:16.710782       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 13:23:17.008463       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:23:17.008488       1 metrics.go:72] Registering metrics
	I1018 13:23:17.008554       1 controller.go:711] "Syncing nftables rules"
	I1018 13:23:26.642459       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:23:26.642530       1 main.go:301] handling current node
	I1018 13:23:36.643884       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:23:36.643924       1 main.go:301] handling current node
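
The "Handling node with IPs" lines above come from kindnet walking the node list on a timer; for each node it picks out the internal IP and the PodCIDR so it can program routes to the other nodes. A stripped-down client-go sketch of that traversal (in-cluster config assumed, since kindnet runs as a DaemonSet; the actual route programming is elided):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster credentials, the way a DaemonSet pod authenticates.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		var internalIP string
		for _, addr := range n.Status.Addresses {
			if addr.Type == corev1.NodeInternalIP {
				internalIP = addr.Address
			}
		}
		// A real CNI daemon would add a route to PodCIDR via internalIP for remote nodes here.
		fmt.Printf("node %s: internal IP %s, pod CIDR %s\n", n.Name, internalIP, n.Spec.PodCIDR)
	}
}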
	
	
	==> kube-apiserver [8c361e752b99933cb610eecab112c500cb210802e9f3280a5254d59cfafa8fd0] <==
	E1018 13:23:05.041853       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1018 13:23:05.093681       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:23:05.103945       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 13:23:05.111857       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:23:05.133299       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:23:05.135359       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:23:05.196876       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:23:05.792265       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 13:23:05.797716       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 13:23:05.797742       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:23:06.567228       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:23:06.611597       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:23:06.708477       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 13:23:06.721025       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 13:23:06.722083       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:23:06.734943       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:23:06.913826       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:23:07.491915       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:23:07.526725       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 13:23:07.538580       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 13:23:12.416264       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 13:23:12.968173       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:23:13.025428       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:23:13.033374       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 13:23:37.854942       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:36988: use of closed network connection
	
	
	==> kube-controller-manager [0409b73ae4ae1a21d544f3ba68754752d4d3cc1bb9c04ada55d61d7bc8871ac6] <==
	I1018 13:23:11.918433       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 13:23:11.921801       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-779884" podCIDRs=["10.244.0.0/24"]
	I1018 13:23:11.923114       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:23:11.925254       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:23:11.939540       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:23:11.939570       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:23:11.939578       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:23:11.942906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:23:11.951444       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 13:23:11.963931       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:23:11.964045       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 13:23:11.964060       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 13:23:11.964569       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 13:23:11.964638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-779884"
	I1018 13:23:11.964677       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 13:23:11.964102       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:23:11.964072       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 13:23:11.964233       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 13:23:11.964247       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:23:11.964256       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 13:23:11.964219       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 13:23:11.964083       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:23:11.964091       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:23:11.971756       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:23:31.966836       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a810b6bd06a451e12bc63feb5f0ecc63f7d0069e0ff1cf00890319bc3087259b] <==
	I1018 13:23:13.641660       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:23:13.753271       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:23:13.853495       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:23:13.853535       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 13:23:13.853612       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:23:13.906340       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:23:13.906390       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:23:13.925682       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:23:13.927050       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:23:13.927068       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:23:13.940543       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:23:13.940572       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:23:13.940597       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:23:13.940602       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:23:13.941204       1 config.go:309] "Starting node config controller"
	I1018 13:23:13.941214       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:23:13.941221       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:23:13.941343       1 config.go:200] "Starting service config controller"
	I1018 13:23:13.941349       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:23:14.042215       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:23:14.042262       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 13:23:14.062613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [77c9e199c8924c68733bd5af8f41cff8a66ce66bc07ea2477eb5b8391a931f33] <==
	E1018 13:23:04.957747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 13:23:04.957832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 13:23:04.957896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 13:23:04.957907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 13:23:04.957950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 13:23:04.962061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 13:23:04.962147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:23:04.962201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 13:23:04.962307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 13:23:04.962360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 13:23:04.962411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 13:23:04.962457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:23:04.962464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 13:23:05.787714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:23:05.795926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 13:23:05.842606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 13:23:05.869085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 13:23:05.892526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 13:23:05.945666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 13:23:05.946076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 13:23:05.980697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 13:23:06.058894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 13:23:06.065551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:23:06.198201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 13:23:08.945574       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: I1018 13:23:12.497641    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b74adbbe-e461-430c-a702-a957e5c4a4d1-xtables-lock\") pod \"kube-proxy-z6q26\" (UID: \"b74adbbe-e461-430c-a702-a957e5c4a4d1\") " pod="kube-system/kube-proxy-z6q26"
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: I1018 13:23:12.497661    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22462756-3f13-454b-a9ea-e5658196e142-lib-modules\") pod \"kindnet-gc7k5\" (UID: \"22462756-3f13-454b-a9ea-e5658196e142\") " pod="kube-system/kindnet-gc7k5"
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: I1018 13:23:12.497682    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22462756-3f13-454b-a9ea-e5658196e142-xtables-lock\") pod \"kindnet-gc7k5\" (UID: \"22462756-3f13-454b-a9ea-e5658196e142\") " pod="kube-system/kindnet-gc7k5"
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: I1018 13:23:12.497699    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd69t\" (UniqueName: \"kubernetes.io/projected/22462756-3f13-454b-a9ea-e5658196e142-kube-api-access-gd69t\") pod \"kindnet-gc7k5\" (UID: \"22462756-3f13-454b-a9ea-e5658196e142\") " pod="kube-system/kindnet-gc7k5"
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: E1018 13:23:12.616772    2045 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: E1018 13:23:12.616810    2045 projected.go:196] Error preparing data for projected volume kube-api-access-bphr9 for pod kube-system/kube-proxy-z6q26: configmap "kube-root-ca.crt" not found
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: E1018 13:23:12.616881    2045 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b74adbbe-e461-430c-a702-a957e5c4a4d1-kube-api-access-bphr9 podName:b74adbbe-e461-430c-a702-a957e5c4a4d1 nodeName:}" failed. No retries permitted until 2025-10-18 13:23:13.116856742 +0000 UTC m=+5.796678990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bphr9" (UniqueName: "kubernetes.io/projected/b74adbbe-e461-430c-a702-a957e5c4a4d1-kube-api-access-bphr9") pod "kube-proxy-z6q26" (UID: "b74adbbe-e461-430c-a702-a957e5c4a4d1") : configmap "kube-root-ca.crt" not found
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: E1018 13:23:12.618203    2045 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: E1018 13:23:12.618236    2045 projected.go:196] Error preparing data for projected volume kube-api-access-gd69t for pod kube-system/kindnet-gc7k5: configmap "kube-root-ca.crt" not found
	Oct 18 13:23:12 no-preload-779884 kubelet[2045]: E1018 13:23:12.618286    2045 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22462756-3f13-454b-a9ea-e5658196e142-kube-api-access-gd69t podName:22462756-3f13-454b-a9ea-e5658196e142 nodeName:}" failed. No retries permitted until 2025-10-18 13:23:13.118268193 +0000 UTC m=+5.798090441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gd69t" (UniqueName: "kubernetes.io/projected/22462756-3f13-454b-a9ea-e5658196e142-kube-api-access-gd69t") pod "kindnet-gc7k5" (UID: "22462756-3f13-454b-a9ea-e5658196e142") : configmap "kube-root-ca.crt" not found
	Oct 18 13:23:13 no-preload-779884 kubelet[2045]: I1018 13:23:13.214426    2045 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 13:23:13 no-preload-779884 kubelet[2045]: W1018 13:23:13.388042    2045 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/crio-19cb5c9008d539e653bfd37ccfac5602a5c403d1bf433827478ed94fa14a872a WatchSource:0}: Error finding container 19cb5c9008d539e653bfd37ccfac5602a5c403d1bf433827478ed94fa14a872a: Status 404 returned error can't find the container with id 19cb5c9008d539e653bfd37ccfac5602a5c403d1bf433827478ed94fa14a872a
	Oct 18 13:23:13 no-preload-779884 kubelet[2045]: W1018 13:23:13.470933    2045 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/crio-f5a0792739b3478d9cb57b50fcfa401956a2a22b425787427d4c786a7ef78604 WatchSource:0}: Error finding container f5a0792739b3478d9cb57b50fcfa401956a2a22b425787427d4c786a7ef78604: Status 404 returned error can't find the container with id f5a0792739b3478d9cb57b50fcfa401956a2a22b425787427d4c786a7ef78604
	Oct 18 13:23:16 no-preload-779884 kubelet[2045]: I1018 13:23:16.532697    2045 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z6q26" podStartSLOduration=4.532680054 podStartE2EDuration="4.532680054s" podCreationTimestamp="2025-10-18 13:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:23:13.610139863 +0000 UTC m=+6.289962127" watchObservedRunningTime="2025-10-18 13:23:16.532680054 +0000 UTC m=+9.212502302"
	Oct 18 13:23:16 no-preload-779884 kubelet[2045]: I1018 13:23:16.637287    2045 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gc7k5" podStartSLOduration=1.837149792 podStartE2EDuration="4.637267336s" podCreationTimestamp="2025-10-18 13:23:12 +0000 UTC" firstStartedPulling="2025-10-18 13:23:13.492707802 +0000 UTC m=+6.172530050" lastFinishedPulling="2025-10-18 13:23:16.292825329 +0000 UTC m=+8.972647594" observedRunningTime="2025-10-18 13:23:16.620379746 +0000 UTC m=+9.300202010" watchObservedRunningTime="2025-10-18 13:23:16.637267336 +0000 UTC m=+9.317089592"
	Oct 18 13:23:27 no-preload-779884 kubelet[2045]: I1018 13:23:27.009236    2045 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 13:23:27 no-preload-779884 kubelet[2045]: I1018 13:23:27.118139    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d5b87af-be40-4b31-9c61-aa12d7e17f65-tmp\") pod \"storage-provisioner\" (UID: \"7d5b87af-be40-4b31-9c61-aa12d7e17f65\") " pod="kube-system/storage-provisioner"
	Oct 18 13:23:27 no-preload-779884 kubelet[2045]: I1018 13:23:27.118202    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5rsd\" (UniqueName: \"kubernetes.io/projected/7d5b87af-be40-4b31-9c61-aa12d7e17f65-kube-api-access-h5rsd\") pod \"storage-provisioner\" (UID: \"7d5b87af-be40-4b31-9c61-aa12d7e17f65\") " pod="kube-system/storage-provisioner"
	Oct 18 13:23:27 no-preload-779884 kubelet[2045]: I1018 13:23:27.118225    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m67wz\" (UniqueName: \"kubernetes.io/projected/672e7011-6bf2-4d3f-96af-c75c979a5e5b-kube-api-access-m67wz\") pod \"coredns-66bc5c9577-fdgz7\" (UID: \"672e7011-6bf2-4d3f-96af-c75c979a5e5b\") " pod="kube-system/coredns-66bc5c9577-fdgz7"
	Oct 18 13:23:27 no-preload-779884 kubelet[2045]: I1018 13:23:27.118245    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/672e7011-6bf2-4d3f-96af-c75c979a5e5b-config-volume\") pod \"coredns-66bc5c9577-fdgz7\" (UID: \"672e7011-6bf2-4d3f-96af-c75c979a5e5b\") " pod="kube-system/coredns-66bc5c9577-fdgz7"
	Oct 18 13:23:27 no-preload-779884 kubelet[2045]: W1018 13:23:27.389396    2045 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/crio-c10f902f4378b2af00fb74a3e3c9fd9eff99e341c322afadc00364a556d45a3a WatchSource:0}: Error finding container c10f902f4378b2af00fb74a3e3c9fd9eff99e341c322afadc00364a556d45a3a: Status 404 returned error can't find the container with id c10f902f4378b2af00fb74a3e3c9fd9eff99e341c322afadc00364a556d45a3a
	Oct 18 13:23:27 no-preload-779884 kubelet[2045]: I1018 13:23:27.676118    2045 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fdgz7" podStartSLOduration=14.676076743 podStartE2EDuration="14.676076743s" podCreationTimestamp="2025-10-18 13:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:23:27.657665241 +0000 UTC m=+20.337487497" watchObservedRunningTime="2025-10-18 13:23:27.676076743 +0000 UTC m=+20.355899007"
	Oct 18 13:23:27 no-preload-779884 kubelet[2045]: I1018 13:23:27.694962    2045 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.694918083 podStartE2EDuration="13.694918083s" podCreationTimestamp="2025-10-18 13:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:23:27.676755773 +0000 UTC m=+20.356578029" watchObservedRunningTime="2025-10-18 13:23:27.694918083 +0000 UTC m=+20.374740339"
	Oct 18 13:23:29 no-preload-779884 kubelet[2045]: I1018 13:23:29.848742    2045 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc74m\" (UniqueName: \"kubernetes.io/projected/4fe34383-2a51-4ea1-b880-6976f0c5dfbf-kube-api-access-dc74m\") pod \"busybox\" (UID: \"4fe34383-2a51-4ea1-b880-6976f0c5dfbf\") " pod="default/busybox"
	Oct 18 13:23:30 no-preload-779884 kubelet[2045]: W1018 13:23:30.119288    2045 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/crio-3d70a5e9a8cf765b8bf7a6bd541cd20ca0ff703f65967844d1a392364cfe0507 WatchSource:0}: Error finding container 3d70a5e9a8cf765b8bf7a6bd541cd20ca0ff703f65967844d1a392364cfe0507: Status 404 returned error can't find the container with id 3d70a5e9a8cf765b8bf7a6bd541cd20ca0ff703f65967844d1a392364cfe0507
	
	
	==> storage-provisioner [ee6b496602c9dba5635b3d18b7d1b9fbbd9cb3b25b138f22211b0db922223583] <==
	I1018 13:23:27.480945       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:23:27.529584       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:23:27.529707       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 13:23:27.542056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:27.550084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:23:27.550335       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:23:27.550619       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-779884_e7a4ec4f-4e21-4a0b-93b9-369057b12a71!
	I1018 13:23:27.550754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd1ede5c-2fc2-42b5-a458-71159756ac6f", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-779884_e7a4ec4f-4e21-4a0b-93b9-369057b12a71 became leader
	W1018 13:23:27.561756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:27.581449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:23:27.652190       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-779884_e7a4ec4f-4e21-4a0b-93b9-369057b12a71!
	W1018 13:23:29.595620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:29.612503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:31.615581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:31.619570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:33.622390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:33.629314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:35.632928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:35.637907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:37.646657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:37.662416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:39.666261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:23:39.673106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
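The repeated client-go warnings in the storage-provisioner log above appear to come from its leader-election lock, which is still held on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) rather than a Lease. A minimal sketch for inspecting that lock by hand, assuming the no-preload-779884 kubeconfig context from this run is still reachable (this check is not part of the harness):

# Hypothetical follow-up: show the Endpoints object backing the provisioner's leader lock.
kubectl --context no-preload-779884 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml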
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-779884 -n no-preload-779884
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-779884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.38s)
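For reference, the post-mortem above reduces to two probes: the API-server state reported by minikube status and a field-selector query for pods stuck outside the Running phase. A minimal sketch for repeating them by hand, assuming the no-preload-779884 profile and kubeconfig context from this run; the events query at the end is an extra, hypothetical step the harness does not perform:

# Same two probes the post-mortem helper runs (profile/context names taken from this run).
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-779884 -n no-preload-779884
kubectl --context no-preload-779884 get po -A --field-selector=status.phase!=Running
# Extra, hypothetical step: recent events usually say why a pod is not Running.
kubectl --context no-preload-779884 get events -A --sort-by=.lastTimestamp | tail -n 20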

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-779884 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-779884 --alsologtostderr -v=1: exit status 80 (2.11550544s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-779884 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:25:00.429254 1031581 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:25:00.429378 1031581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:25:00.429389 1031581 out.go:374] Setting ErrFile to fd 2...
	I1018 13:25:00.429398 1031581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:25:00.429687 1031581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:25:00.430049 1031581 out.go:368] Setting JSON to false
	I1018 13:25:00.430069 1031581 mustload.go:65] Loading cluster: no-preload-779884
	I1018 13:25:00.430504 1031581 config.go:182] Loaded profile config "no-preload-779884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:25:00.431116 1031581 cli_runner.go:164] Run: docker container inspect no-preload-779884 --format={{.State.Status}}
	I1018 13:25:00.460822 1031581 host.go:66] Checking if "no-preload-779884" exists ...
	I1018 13:25:00.461177 1031581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:25:00.577102 1031581 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 13:25:00.563032743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:25:00.577904 1031581 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-779884 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 13:25:00.581331 1031581 out.go:179] * Pausing node no-preload-779884 ... 
	I1018 13:25:00.584172 1031581 host.go:66] Checking if "no-preload-779884" exists ...
	I1018 13:25:00.584573 1031581 ssh_runner.go:195] Run: systemctl --version
	I1018 13:25:00.584641 1031581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-779884
	I1018 13:25:00.607194 1031581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34172 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/no-preload-779884/id_rsa Username:docker}
	I1018 13:25:00.728318 1031581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:25:00.753826 1031581 pause.go:52] kubelet running: true
	I1018 13:25:00.753947 1031581 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:25:01.115232 1031581 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:25:01.115353 1031581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:25:01.198471 1031581 cri.go:89] found id: "ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d"
	I1018 13:25:01.198510 1031581 cri.go:89] found id: "2acc93044628169ea6436041737d72874995cec0bf258b196d67674ce66e5b1a"
	I1018 13:25:01.198515 1031581 cri.go:89] found id: "92c12098612753238b3bbdae055f559ac0d4a79535b3b02cd6cb0eb6938f7daf"
	I1018 13:25:01.198519 1031581 cri.go:89] found id: "a7d6452329a4e7db4dab4d762e866f4e2b95ded5b24f3cba614f53534faacde7"
	I1018 13:25:01.198524 1031581 cri.go:89] found id: "b3c500723387bb4da1aa6ad2a497c1876e4fde0299065a39583b6fe2b5665375"
	I1018 13:25:01.198527 1031581 cri.go:89] found id: "16c6ce16d1fedc6c8abc8dcc8ec26540a3b027cb3aae542e5bb96bce20f62f4a"
	I1018 13:25:01.198531 1031581 cri.go:89] found id: "208bca4af3d9ac07c17e2bc79bac77257a4dc9124d606f9ab23f83508618bc86"
	I1018 13:25:01.198534 1031581 cri.go:89] found id: "1091417c452eb2cd93c4e416c602e6b8b1e09d9cd4a8210ef02cbdf618a5faba"
	I1018 13:25:01.198538 1031581 cri.go:89] found id: "efb92bf1d21e5e703b78994086ed6ac620b757e0d9ce9d0c24ad65c41901b598"
	I1018 13:25:01.198545 1031581 cri.go:89] found id: "a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	I1018 13:25:01.198548 1031581 cri.go:89] found id: "1f6ed76f33ec41de20d84a4d205fc3deae7b56485352e204bf54584af25765f4"
	I1018 13:25:01.198551 1031581 cri.go:89] found id: ""
	I1018 13:25:01.198645 1031581 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:25:01.216231 1031581 retry.go:31] will retry after 337.522723ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:25:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:25:01.554602 1031581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:25:01.568950 1031581 pause.go:52] kubelet running: false
	I1018 13:25:01.569046 1031581 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:25:01.758513 1031581 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:25:01.758688 1031581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:25:01.834237 1031581 cri.go:89] found id: "ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d"
	I1018 13:25:01.834272 1031581 cri.go:89] found id: "2acc93044628169ea6436041737d72874995cec0bf258b196d67674ce66e5b1a"
	I1018 13:25:01.834278 1031581 cri.go:89] found id: "92c12098612753238b3bbdae055f559ac0d4a79535b3b02cd6cb0eb6938f7daf"
	I1018 13:25:01.834282 1031581 cri.go:89] found id: "a7d6452329a4e7db4dab4d762e866f4e2b95ded5b24f3cba614f53534faacde7"
	I1018 13:25:01.834285 1031581 cri.go:89] found id: "b3c500723387bb4da1aa6ad2a497c1876e4fde0299065a39583b6fe2b5665375"
	I1018 13:25:01.834289 1031581 cri.go:89] found id: "16c6ce16d1fedc6c8abc8dcc8ec26540a3b027cb3aae542e5bb96bce20f62f4a"
	I1018 13:25:01.834311 1031581 cri.go:89] found id: "208bca4af3d9ac07c17e2bc79bac77257a4dc9124d606f9ab23f83508618bc86"
	I1018 13:25:01.834315 1031581 cri.go:89] found id: "1091417c452eb2cd93c4e416c602e6b8b1e09d9cd4a8210ef02cbdf618a5faba"
	I1018 13:25:01.834319 1031581 cri.go:89] found id: "efb92bf1d21e5e703b78994086ed6ac620b757e0d9ce9d0c24ad65c41901b598"
	I1018 13:25:01.834327 1031581 cri.go:89] found id: "a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	I1018 13:25:01.834351 1031581 cri.go:89] found id: "1f6ed76f33ec41de20d84a4d205fc3deae7b56485352e204bf54584af25765f4"
	I1018 13:25:01.834362 1031581 cri.go:89] found id: ""
	I1018 13:25:01.834436 1031581 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:25:01.846351 1031581 retry.go:31] will retry after 289.769414ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:25:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:25:02.136970 1031581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:25:02.153112 1031581 pause.go:52] kubelet running: false
	I1018 13:25:02.153236 1031581 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:25:02.327782 1031581 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:25:02.327963 1031581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:25:02.405928 1031581 cri.go:89] found id: "ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d"
	I1018 13:25:02.405964 1031581 cri.go:89] found id: "2acc93044628169ea6436041737d72874995cec0bf258b196d67674ce66e5b1a"
	I1018 13:25:02.405970 1031581 cri.go:89] found id: "92c12098612753238b3bbdae055f559ac0d4a79535b3b02cd6cb0eb6938f7daf"
	I1018 13:25:02.405974 1031581 cri.go:89] found id: "a7d6452329a4e7db4dab4d762e866f4e2b95ded5b24f3cba614f53534faacde7"
	I1018 13:25:02.405977 1031581 cri.go:89] found id: "b3c500723387bb4da1aa6ad2a497c1876e4fde0299065a39583b6fe2b5665375"
	I1018 13:25:02.405980 1031581 cri.go:89] found id: "16c6ce16d1fedc6c8abc8dcc8ec26540a3b027cb3aae542e5bb96bce20f62f4a"
	I1018 13:25:02.405984 1031581 cri.go:89] found id: "208bca4af3d9ac07c17e2bc79bac77257a4dc9124d606f9ab23f83508618bc86"
	I1018 13:25:02.405987 1031581 cri.go:89] found id: "1091417c452eb2cd93c4e416c602e6b8b1e09d9cd4a8210ef02cbdf618a5faba"
	I1018 13:25:02.405990 1031581 cri.go:89] found id: "efb92bf1d21e5e703b78994086ed6ac620b757e0d9ce9d0c24ad65c41901b598"
	I1018 13:25:02.405997 1031581 cri.go:89] found id: "a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	I1018 13:25:02.406005 1031581 cri.go:89] found id: "1f6ed76f33ec41de20d84a4d205fc3deae7b56485352e204bf54584af25765f4"
	I1018 13:25:02.406009 1031581 cri.go:89] found id: ""
	I1018 13:25:02.406084 1031581 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:25:02.421256 1031581 out.go:203] 
	W1018 13:25:02.424185 1031581 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:25:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:25:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 13:25:02.424210 1031581 out.go:285] * 
	* 
	W1018 13:25:02.431217 1031581 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 13:25:02.434228 1031581 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-779884 --alsologtostderr -v=1 failed: exit status 80
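The stderr above shows the pause path in three steps: disable the kubelet, list kube-system/kubernetes-dashboard/istio-operator containers through crictl, then call `sudo runc list -f json` to find what to pause. That last step is what exits non-zero, because /run/runc does not exist inside the node, so the retries and the final GUEST_PAUSE error all follow from that one missing state directory. A minimal sketch for checking the runtime state paths on the node by hand, assuming the no-preload-779884 profile from this run is still up; which of these directories CRI-O actually uses depends on its runtime configuration in the kicbase image, so the paths below are assumptions to verify, not a fixed list:

# Containers as CRI-O sees them (should match the IDs found in the stderr above).
out/minikube-linux-arm64 -p no-preload-779884 ssh -- "sudo crictl ps -a --quiet"
# Candidate OCI-runtime state directories on the node.
out/minikube-linux-arm64 -p no-preload-779884 ssh -- "sudo ls -d /run/runc /run/crun /run/crio 2>/dev/null || true"
# The exact call that fails during pause, run manually to reproduce the same error.
out/minikube-linux-arm64 -p no-preload-779884 ssh -- "sudo runc list -f json"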
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-779884
helpers_test.go:243: (dbg) docker inspect no-preload-779884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45",
	        "Created": "2025-10-18T13:22:22.245395401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1026377,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:23:52.774083852Z",
	            "FinishedAt": "2025-10-18T13:23:51.968755897Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/hostname",
	        "HostsPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/hosts",
	        "LogPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45-json.log",
	        "Name": "/no-preload-779884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-779884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-779884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45",
	                "LowerDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-779884",
	                "Source": "/var/lib/docker/volumes/no-preload-779884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-779884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-779884",
	                "name.minikube.sigs.k8s.io": "no-preload-779884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b29cd3feb3b3e51009619c48a29cb74086e807d40b0c1d7fd7fccbda66f7f2b7",
	            "SandboxKey": "/var/run/docker/netns/b29cd3feb3b3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34172"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34173"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34176"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34174"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34175"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-779884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:1e:b1:0a:87:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "939cb65a3289c015d5d4b8e7692a9fb9fd1181110d0a4789eecbc7983e7821f8",
	                    "EndpointID": "4e1b85d700d85419468417372feb7da171e627951d54eabb9b8ca95bd77d6b13",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-779884",
	                        "78baa17fea0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-779884 -n no-preload-779884
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-779884 -n no-preload-779884: exit status 2 (380.873118ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-779884 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-779884 logs -n 25: (1.373784499s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-076887   │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p force-systemd-env-914730                                                                                                                                                                                                                   │ force-systemd-env-914730 │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p cert-options-179041 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-179041      │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ cert-options-179041 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-179041      │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ -p cert-options-179041 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-179041      │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p cert-options-179041                                                                                                                                                                                                                        │ cert-options-179041      │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │                     │
	│ stop    │ -p old-k8s-version-460322 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │ 18 Oct 25 13:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-460322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-460322 image list --format=json                                                                                                                                                                                               │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ pause   │ -p old-k8s-version-460322 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │                     │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-076887   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │                     │
	│ stop    │ -p no-preload-779884 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ addons  │ enable dashboard -p no-preload-779884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:24 UTC │
	│ delete  │ -p cert-expiration-076887                                                                                                                                                                                                                     │ cert-expiration-076887   │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:24 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829       │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │                     │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                                                                                                    │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:24:19
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:24:19.537771 1029063 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:24:19.538046 1029063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:24:19.538083 1029063 out.go:374] Setting ErrFile to fd 2...
	I1018 13:24:19.538105 1029063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:24:19.538456 1029063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:24:19.539041 1029063 out.go:368] Setting JSON to false
	I1018 13:24:19.540432 1029063 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18412,"bootTime":1760775448,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:24:19.540562 1029063 start.go:141] virtualization:  
	I1018 13:24:19.544348 1029063 out.go:179] * [embed-certs-774829] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:24:19.548601 1029063 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:24:19.548681 1029063 notify.go:220] Checking for updates...
	I1018 13:24:19.556454 1029063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:24:19.559593 1029063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:24:19.563199 1029063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:24:19.566327 1029063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:24:19.569463 1029063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:24:19.573000 1029063 config.go:182] Loaded profile config "no-preload-779884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:24:19.573165 1029063 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:24:19.625633 1029063 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:24:19.625853 1029063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:24:19.739860 1029063 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:24:19.727622327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:24:19.739977 1029063 docker.go:318] overlay module found
	I1018 13:24:19.744158 1029063 out.go:179] * Using the docker driver based on user configuration
	I1018 13:24:19.747098 1029063 start.go:305] selected driver: docker
	I1018 13:24:19.747122 1029063 start.go:925] validating driver "docker" against <nil>
	I1018 13:24:19.747138 1029063 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:24:19.747969 1029063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:24:19.870414 1029063 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:24:19.858184088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:24:19.870566 1029063 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 13:24:19.870792 1029063 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:24:19.873892 1029063 out.go:179] * Using Docker driver with root privileges
	I1018 13:24:19.876848 1029063 cni.go:84] Creating CNI manager for ""
	I1018 13:24:19.876920 1029063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:24:19.876929 1029063 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 13:24:19.877012 1029063 start.go:349] cluster config:
	{Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:24:19.880295 1029063 out.go:179] * Starting "embed-certs-774829" primary control-plane node in "embed-certs-774829" cluster
	I1018 13:24:19.882823 1029063 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:24:19.886231 1029063 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:24:19.889268 1029063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:24:19.889303 1029063 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:24:19.889326 1029063 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:24:19.889353 1029063 cache.go:58] Caching tarball of preloaded images
	I1018 13:24:19.889441 1029063 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:24:19.889450 1029063 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:24:19.889578 1029063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/config.json ...
	I1018 13:24:19.889596 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/config.json: {Name:mkbd40880e9246c893533c4b7cafc7e61f9252f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:19.911079 1029063 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:24:19.911097 1029063 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:24:19.911111 1029063 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:24:19.911145 1029063 start.go:360] acquireMachinesLock for embed-certs-774829: {Name:mk5aa8563d93509fb0e97633ae4ffa1630655c85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:24:19.911246 1029063 start.go:364] duration metric: took 79.41µs to acquireMachinesLock for "embed-certs-774829"
	I1018 13:24:19.911272 1029063 start.go:93] Provisioning new machine with config: &{Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:24:19.911341 1029063 start.go:125] createHost starting for "" (driver="docker")
	W1018 13:24:18.086391 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:20.588170 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:19.914965 1029063 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 13:24:19.915298 1029063 start.go:159] libmachine.API.Create for "embed-certs-774829" (driver="docker")
	I1018 13:24:19.915356 1029063 client.go:168] LocalClient.Create starting
	I1018 13:24:19.915455 1029063 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 13:24:19.915526 1029063 main.go:141] libmachine: Decoding PEM data...
	I1018 13:24:19.915567 1029063 main.go:141] libmachine: Parsing certificate...
	I1018 13:24:19.915755 1029063 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 13:24:19.915811 1029063 main.go:141] libmachine: Decoding PEM data...
	I1018 13:24:19.915842 1029063 main.go:141] libmachine: Parsing certificate...
	I1018 13:24:19.916339 1029063 cli_runner.go:164] Run: docker network inspect embed-certs-774829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 13:24:19.946910 1029063 cli_runner.go:211] docker network inspect embed-certs-774829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 13:24:19.946988 1029063 network_create.go:284] running [docker network inspect embed-certs-774829] to gather additional debugging logs...
	I1018 13:24:19.947018 1029063 cli_runner.go:164] Run: docker network inspect embed-certs-774829
	W1018 13:24:19.975081 1029063 cli_runner.go:211] docker network inspect embed-certs-774829 returned with exit code 1
	I1018 13:24:19.975107 1029063 network_create.go:287] error running [docker network inspect embed-certs-774829]: docker network inspect embed-certs-774829: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-774829 not found
	I1018 13:24:19.975121 1029063 network_create.go:289] output of [docker network inspect embed-certs-774829]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-774829 not found
	
	** /stderr **
	I1018 13:24:19.975216 1029063 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:24:19.992497 1029063 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
	I1018 13:24:19.992876 1029063 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b162987809b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:5f:25:ac:cd:2a} reservation:<nil>}
	I1018 13:24:19.993109 1029063 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c986d614dab5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:69:4f:12:e6:e4} reservation:<nil>}
	I1018 13:24:19.993531 1029063 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001995f60}
	I1018 13:24:19.993549 1029063 network_create.go:124] attempt to create docker network embed-certs-774829 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 13:24:19.993609 1029063 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-774829 embed-certs-774829
	I1018 13:24:20.073093 1029063 network_create.go:108] docker network embed-certs-774829 192.168.76.0/24 created
	I1018 13:24:20.073125 1029063 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-774829" container
	I1018 13:24:20.073206 1029063 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 13:24:20.091134 1029063 cli_runner.go:164] Run: docker volume create embed-certs-774829 --label name.minikube.sigs.k8s.io=embed-certs-774829 --label created_by.minikube.sigs.k8s.io=true
	I1018 13:24:20.114618 1029063 oci.go:103] Successfully created a docker volume embed-certs-774829
	I1018 13:24:20.114704 1029063 cli_runner.go:164] Run: docker run --rm --name embed-certs-774829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-774829 --entrypoint /usr/bin/test -v embed-certs-774829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 13:24:20.887610 1029063 oci.go:107] Successfully prepared a docker volume embed-certs-774829
	I1018 13:24:20.887693 1029063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:24:20.887715 1029063 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 13:24:20.887787 1029063 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-774829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 13:24:23.084557 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:25.582000 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:26.708308 1029063 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-774829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.82048424s)
	I1018 13:24:26.708337 1029063 kic.go:203] duration metric: took 5.8206197s to extract preloaded images to volume ...
	W1018 13:24:26.708499 1029063 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 13:24:26.708619 1029063 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 13:24:26.770364 1029063 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-774829 --name embed-certs-774829 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-774829 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-774829 --network embed-certs-774829 --ip 192.168.76.2 --volume embed-certs-774829:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 13:24:27.104707 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Running}}
	I1018 13:24:27.124876 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:27.147257 1029063 cli_runner.go:164] Run: docker exec embed-certs-774829 stat /var/lib/dpkg/alternatives/iptables
	I1018 13:24:27.204186 1029063 oci.go:144] the created container "embed-certs-774829" has a running status.
	I1018 13:24:27.204216 1029063 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa...
	I1018 13:24:29.093185 1029063 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 13:24:29.114188 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:29.136011 1029063 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 13:24:29.136037 1029063 kic_runner.go:114] Args: [docker exec --privileged embed-certs-774829 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 13:24:29.207038 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:29.234262 1029063 machine.go:93] provisionDockerMachine start ...
	I1018 13:24:29.234358 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:29.261846 1029063 main.go:141] libmachine: Using SSH client type: native
	I1018 13:24:29.262181 1029063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1018 13:24:29.262191 1029063 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:24:29.431529 1029063 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-774829
	
	I1018 13:24:29.431565 1029063 ubuntu.go:182] provisioning hostname "embed-certs-774829"
	I1018 13:24:29.431632 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:29.457002 1029063 main.go:141] libmachine: Using SSH client type: native
	I1018 13:24:29.457377 1029063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1018 13:24:29.457393 1029063 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-774829 && echo "embed-certs-774829" | sudo tee /etc/hostname
	W1018 13:24:27.587128 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:30.086404 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:29.649152 1029063 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-774829
	
	I1018 13:24:29.649236 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:29.668011 1029063 main.go:141] libmachine: Using SSH client type: native
	I1018 13:24:29.668321 1029063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1018 13:24:29.668345 1029063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-774829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-774829/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-774829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:24:29.815890 1029063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:24:29.815987 1029063 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:24:29.816045 1029063 ubuntu.go:190] setting up certificates
	I1018 13:24:29.816074 1029063 provision.go:84] configureAuth start
	I1018 13:24:29.816181 1029063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:24:29.834999 1029063 provision.go:143] copyHostCerts
	I1018 13:24:29.835065 1029063 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:24:29.835081 1029063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:24:29.835156 1029063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:24:29.835241 1029063 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:24:29.835246 1029063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:24:29.835270 1029063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:24:29.835317 1029063 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:24:29.835322 1029063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:24:29.835342 1029063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:24:29.835388 1029063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.embed-certs-774829 san=[127.0.0.1 192.168.76.2 embed-certs-774829 localhost minikube]
	I1018 13:24:30.526358 1029063 provision.go:177] copyRemoteCerts
	I1018 13:24:30.526430 1029063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:24:30.526474 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:30.543911 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:30.648552 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:24:30.670878 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 13:24:30.693224 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:24:30.712300 1029063 provision.go:87] duration metric: took 896.197444ms to configureAuth
	I1018 13:24:30.712326 1029063 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:24:30.712520 1029063 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:24:30.712640 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:30.730294 1029063 main.go:141] libmachine: Using SSH client type: native
	I1018 13:24:30.730609 1029063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1018 13:24:30.730630 1029063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:24:31.094543 1029063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:24:31.094572 1029063 machine.go:96] duration metric: took 1.860286003s to provisionDockerMachine
	I1018 13:24:31.094583 1029063 client.go:171] duration metric: took 11.179208363s to LocalClient.Create
	I1018 13:24:31.094602 1029063 start.go:167] duration metric: took 11.17930785s to libmachine.API.Create "embed-certs-774829"
	I1018 13:24:31.094615 1029063 start.go:293] postStartSetup for "embed-certs-774829" (driver="docker")
	I1018 13:24:31.094626 1029063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:24:31.094699 1029063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:24:31.094753 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:31.116210 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:31.224991 1029063 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:24:31.228852 1029063 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:24:31.228884 1029063 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:24:31.228897 1029063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:24:31.229044 1029063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:24:31.229191 1029063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:24:31.229344 1029063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:24:31.237593 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:24:31.258089 1029063 start.go:296] duration metric: took 163.4591ms for postStartSetup
	I1018 13:24:31.258486 1029063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:24:31.276390 1029063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/config.json ...
	I1018 13:24:31.276676 1029063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:24:31.276729 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:31.293422 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:31.401682 1029063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:24:31.406784 1029063 start.go:128] duration metric: took 11.495428117s to createHost
	I1018 13:24:31.406821 1029063 start.go:83] releasing machines lock for "embed-certs-774829", held for 11.49556571s
	I1018 13:24:31.406921 1029063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:24:31.423929 1029063 ssh_runner.go:195] Run: cat /version.json
	I1018 13:24:31.424006 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:31.424087 1029063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:24:31.424149 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:31.448225 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:31.448612 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:31.551810 1029063 ssh_runner.go:195] Run: systemctl --version
	I1018 13:24:31.651308 1029063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:24:31.704521 1029063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:24:31.709791 1029063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:24:31.709862 1029063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:24:31.739981 1029063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 13:24:31.740002 1029063 start.go:495] detecting cgroup driver to use...
	I1018 13:24:31.740037 1029063 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:24:31.740092 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:24:31.759198 1029063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:24:31.772620 1029063 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:24:31.772685 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:24:31.791771 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:24:31.811536 1029063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:24:31.936217 1029063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:24:32.067193 1029063 docker.go:234] disabling docker service ...
	I1018 13:24:32.067317 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:24:32.097233 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:24:32.112544 1029063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:24:32.227126 1029063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:24:32.362775 1029063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:24:32.376695 1029063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:24:32.393611 1029063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:24:32.393675 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.404080 1029063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:24:32.404202 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.414863 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.424586 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.434184 1029063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:24:32.443853 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.453158 1029063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.467883 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.476882 1029063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:24:32.484543 1029063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:24:32.492343 1029063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:24:32.608245 1029063 ssh_runner.go:195] Run: sudo systemctl restart crio
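Note: the sequence of sed edits above sets the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in CRI-O's drop-in before the restart. A minimal sketch for confirming the result on the node, assuming the same drop-in path used by the commands above:
# Inspect the keys minikube just rewrote in the CRI-O drop-in.
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf
# Expected values (per the sed commands above):
#   pause_image = "registry.k8s.io/pause:3.10.1"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   default_sysctls = [
#     "net.ipv4.ip_unprivileged_port_start=0",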
	I1018 13:24:32.741728 1029063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:24:32.741843 1029063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:24:32.745976 1029063 start.go:563] Will wait 60s for crictl version
	I1018 13:24:32.746100 1029063 ssh_runner.go:195] Run: which crictl
	I1018 13:24:32.750630 1029063 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:24:32.780290 1029063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:24:32.780446 1029063 ssh_runner.go:195] Run: crio --version
	I1018 13:24:32.813473 1029063 ssh_runner.go:195] Run: crio --version
	I1018 13:24:32.850549 1029063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:24:32.853478 1029063 cli_runner.go:164] Run: docker network inspect embed-certs-774829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:24:32.871615 1029063 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 13:24:32.875840 1029063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:24:32.885740 1029063 kubeadm.go:883] updating cluster {Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:24:32.885850 1029063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:24:32.885907 1029063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:24:32.925877 1029063 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:24:32.925899 1029063 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:24:32.925956 1029063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:24:32.956374 1029063 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:24:32.956450 1029063 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:24:32.956475 1029063 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 13:24:32.956608 1029063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-774829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:24:32.956730 1029063 ssh_runner.go:195] Run: crio config
	I1018 13:24:33.025702 1029063 cni.go:84] Creating CNI manager for ""
	I1018 13:24:33.025777 1029063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:24:33.025802 1029063 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:24:33.025825 1029063 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-774829 NodeName:embed-certs-774829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:24:33.025955 1029063 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-774829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
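Note: the kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new below. As a sketch, it can be sanity-checked on the node without creating anything via kubeadm's dry-run mode; the binary path comes from the log, and the same --ignore-preflight-errors flags used by the real init later may be needed:
# Validate the rendered config without modifying the node (sketch).
sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run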
	
	I1018 13:24:33.026035 1029063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:24:33.034372 1029063 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:24:33.034466 1029063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:24:33.042330 1029063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 13:24:33.056828 1029063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:24:33.070682 1029063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
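Note: the three scp calls above write the kubelet drop-in, the kubelet unit, and the kubeadm config shown earlier. A quick way to see exactly what the kubelet will be started with, using standard systemd tooling on the node:
# Show the effective kubelet unit plus the 10-kubeadm.conf drop-in just written.
systemctl cat kubelet
# The ExecStart line should match the flags logged above (hostname-override, node-ip, ...).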
	I1018 13:24:33.087130 1029063 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:24:33.091108 1029063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:24:33.101522 1029063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:24:33.221662 1029063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:24:33.242346 1029063 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829 for IP: 192.168.76.2
	I1018 13:24:33.242368 1029063 certs.go:195] generating shared ca certs ...
	I1018 13:24:33.242386 1029063 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:33.242588 1029063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:24:33.242659 1029063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:24:33.242672 1029063 certs.go:257] generating profile certs ...
	I1018 13:24:33.242754 1029063 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.key
	I1018 13:24:33.242774 1029063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.crt with IP's: []
	I1018 13:24:33.749862 1029063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.crt ...
	I1018 13:24:33.749896 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.crt: {Name:mk2cf11d98d4444b656532354b0ad79b03575cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:33.750128 1029063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.key ...
	I1018 13:24:33.750148 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.key: {Name:mkecd3f6cbbe1ca98793c069c19e67bbbfca1e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:33.750276 1029063 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f
	I1018 13:24:33.750298 1029063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt.971cb07f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 13:24:34.052338 1029063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt.971cb07f ...
	I1018 13:24:34.052371 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt.971cb07f: {Name:mkb02cb984db27304bc478ff8bb617ce55ed1072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:34.052611 1029063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f ...
	I1018 13:24:34.052631 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f: {Name:mkb31dcce150812aac8b5039a2b291917959b528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:34.052762 1029063 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt.971cb07f -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt
	I1018 13:24:34.052854 1029063 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key
	I1018 13:24:34.052922 1029063 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key
	I1018 13:24:34.052940 1029063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt with IP's: []
	I1018 13:24:34.684284 1029063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt ...
	I1018 13:24:34.684318 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt: {Name:mk5584cd951d55af03be0d2a1675865c4ff64332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:34.684540 1029063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key ...
	I1018 13:24:34.684557 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key: {Name:mk0b31513714281203a7f5dea81ea3729fac281e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:34.684765 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:24:34.684813 1029063 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:24:34.684827 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:24:34.684854 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:24:34.684882 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:24:34.684911 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:24:34.684958 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:24:34.685537 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:24:34.704215 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:24:34.722387 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:24:34.740655 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:24:34.761491 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 13:24:34.780623 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:24:34.814883 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:24:34.835625 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 13:24:34.854093 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:24:34.872760 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:24:34.891684 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:24:34.910649 1029063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
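Note: all profile certificates and the kubeconfig are now on the node. The apiserver cert generated above was signed for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]; a sketch for double-checking the SANs it actually carries, reusing openssl as this flow already does:
# Print the Subject Alternative Names of the apiserver cert copied above.
sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
  | grep -A1 'Subject Alternative Name'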
	I1018 13:24:34.924174 1029063 ssh_runner.go:195] Run: openssl version
	I1018 13:24:34.930941 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:24:34.939467 1029063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:24:34.943479 1029063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:24:34.943585 1029063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:24:34.985973 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:24:34.994839 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:24:35.006137 1029063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:24:35.012326 1029063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:24:35.012432 1029063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:24:35.058151 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:24:35.067011 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:24:35.075742 1029063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:24:35.080220 1029063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:24:35.080307 1029063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:24:35.123085 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
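Note: the test -L / ln -fs commands above use OpenSSL's subject-hash naming convention: the symlink in /etc/ssl/certs is named after the hash that `openssl x509 -hash` prints for the certificate, which is how b5213941, 3ec20f2e and 51391683 in the log line up with the .pem files. A small sketch of the same pattern for the minikube CA:
# The symlink name is derived from the certificate's subject hash.
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
echo "$hash"                      # prints b5213941 for this CA (per the log)
ls -l "/etc/ssl/certs/${hash}.0"  # -> /etc/ssl/certs/minikubeCA.pem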
	I1018 13:24:35.132177 1029063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:24:35.136160 1029063 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 13:24:35.136224 1029063 kubeadm.go:400] StartCluster: {Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:24:35.136297 1029063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:24:35.136363 1029063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:24:35.172677 1029063 cri.go:89] found id: ""
	I1018 13:24:35.172756 1029063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:24:35.180961 1029063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 13:24:35.189735 1029063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 13:24:35.189809 1029063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 13:24:35.198843 1029063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 13:24:35.198866 1029063 kubeadm.go:157] found existing configuration files:
	
	I1018 13:24:35.198941 1029063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 13:24:35.209159 1029063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 13:24:35.209281 1029063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 13:24:35.220102 1029063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 13:24:35.228275 1029063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 13:24:35.228347 1029063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 13:24:35.236765 1029063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 13:24:35.245205 1029063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 13:24:35.245274 1029063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 13:24:35.253542 1029063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 13:24:35.262683 1029063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 13:24:35.262785 1029063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 13:24:35.271086 1029063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 13:24:35.314701 1029063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 13:24:35.314792 1029063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 13:24:35.338486 1029063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 13:24:35.338565 1029063 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 13:24:35.338607 1029063 kubeadm.go:318] OS: Linux
	I1018 13:24:35.338659 1029063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 13:24:35.338714 1029063 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 13:24:35.338768 1029063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 13:24:35.338823 1029063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 13:24:35.338877 1029063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 13:24:35.338931 1029063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 13:24:35.338983 1029063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 13:24:35.339037 1029063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 13:24:35.339105 1029063 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 13:24:35.418928 1029063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 13:24:35.419158 1029063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 13:24:35.419273 1029063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 13:24:35.434563 1029063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 13:24:32.583087 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:34.584580 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:36.584705 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:35.439179 1029063 out.go:252]   - Generating certificates and keys ...
	I1018 13:24:35.439281 1029063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 13:24:35.439370 1029063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 13:24:35.760334 1029063 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 13:24:36.512541 1029063 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 13:24:37.605448 1029063 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 13:24:38.043362 1029063 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 13:24:38.746038 1029063 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 13:24:38.746186 1029063 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-774829 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 13:24:39.282274 1029063 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 13:24:39.282637 1029063 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-774829 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1018 13:24:38.584813 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:40.586066 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:39.562536 1029063 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 13:24:39.877389 1029063 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 13:24:40.452372 1029063 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 13:24:40.452729 1029063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 13:24:41.614855 1029063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 13:24:41.792727 1029063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 13:24:43.023173 1029063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 13:24:43.485311 1029063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 13:24:44.190961 1029063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 13:24:44.191712 1029063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 13:24:44.194438 1029063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 13:24:44.197818 1029063 out.go:252]   - Booting up control plane ...
	I1018 13:24:44.197936 1029063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 13:24:44.198033 1029063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 13:24:44.198114 1029063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 13:24:44.215876 1029063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 13:24:44.216465 1029063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 13:24:44.225681 1029063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 13:24:44.226315 1029063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 13:24:44.226519 1029063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 13:24:44.360425 1029063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 13:24:44.360552 1029063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1018 13:24:43.084027 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:45.090350 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:46.582334 1026245 pod_ready.go:94] pod "coredns-66bc5c9577-fdgz7" is "Ready"
	I1018 13:24:46.582358 1026245 pod_ready.go:86] duration metric: took 37.505290008s for pod "coredns-66bc5c9577-fdgz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.590479 1026245 pod_ready.go:83] waiting for pod "etcd-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.594808 1026245 pod_ready.go:94] pod "etcd-no-preload-779884" is "Ready"
	I1018 13:24:46.594877 1026245 pod_ready.go:86] duration metric: took 4.374408ms for pod "etcd-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.598132 1026245 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.606142 1026245 pod_ready.go:94] pod "kube-apiserver-no-preload-779884" is "Ready"
	I1018 13:24:46.606166 1026245 pod_ready.go:86] duration metric: took 8.011176ms for pod "kube-apiserver-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.611345 1026245 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.780991 1026245 pod_ready.go:94] pod "kube-controller-manager-no-preload-779884" is "Ready"
	I1018 13:24:46.781070 1026245 pod_ready.go:86] duration metric: took 169.703491ms for pod "kube-controller-manager-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.981010 1026245 pod_ready.go:83] waiting for pod "kube-proxy-z6q26" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:47.380339 1026245 pod_ready.go:94] pod "kube-proxy-z6q26" is "Ready"
	I1018 13:24:47.380362 1026245 pod_ready.go:86] duration metric: took 399.32918ms for pod "kube-proxy-z6q26" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:47.581019 1026245 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:47.980701 1026245 pod_ready.go:94] pod "kube-scheduler-no-preload-779884" is "Ready"
	I1018 13:24:47.980726 1026245 pod_ready.go:86] duration metric: took 399.683612ms for pod "kube-scheduler-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:47.980738 1026245 pod_ready.go:40] duration metric: took 38.912401326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:24:48.081964 1026245 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:24:48.085107 1026245 out.go:179] * Done! kubectl is now configured to use "no-preload-779884" cluster and "default" namespace by default
	I1018 13:24:45.860990 1029063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500855762s
	I1018 13:24:45.864601 1029063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 13:24:45.864699 1029063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 13:24:45.864792 1029063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 13:24:45.864873 1029063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 13:24:49.916911 1029063 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.051915213s
	I1018 13:24:52.099020 1029063 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.23430855s
	I1018 13:24:52.866245 1029063 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001515404s
	I1018 13:24:52.888085 1029063 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 13:24:52.901033 1029063 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 13:24:52.916286 1029063 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 13:24:52.916508 1029063 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-774829 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 13:24:52.929110 1029063 kubeadm.go:318] [bootstrap-token] Using token: celdjk.odjq7panvfe244w0
	I1018 13:24:52.932093 1029063 out.go:252]   - Configuring RBAC rules ...
	I1018 13:24:52.932233 1029063 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 13:24:52.937236 1029063 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 13:24:52.952039 1029063 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 13:24:52.956483 1029063 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 13:24:52.960910 1029063 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 13:24:52.973104 1029063 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 13:24:53.273873 1029063 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 13:24:53.726108 1029063 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 13:24:54.273008 1029063 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 13:24:54.274257 1029063 kubeadm.go:318] 
	I1018 13:24:54.274365 1029063 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 13:24:54.274383 1029063 kubeadm.go:318] 
	I1018 13:24:54.274464 1029063 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 13:24:54.274473 1029063 kubeadm.go:318] 
	I1018 13:24:54.274499 1029063 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 13:24:54.274575 1029063 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 13:24:54.274631 1029063 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 13:24:54.274639 1029063 kubeadm.go:318] 
	I1018 13:24:54.274695 1029063 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 13:24:54.274704 1029063 kubeadm.go:318] 
	I1018 13:24:54.274753 1029063 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 13:24:54.274763 1029063 kubeadm.go:318] 
	I1018 13:24:54.274818 1029063 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 13:24:54.274902 1029063 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 13:24:54.274977 1029063 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 13:24:54.274986 1029063 kubeadm.go:318] 
	I1018 13:24:54.275074 1029063 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 13:24:54.275157 1029063 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 13:24:54.275168 1029063 kubeadm.go:318] 
	I1018 13:24:54.275256 1029063 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token celdjk.odjq7panvfe244w0 \
	I1018 13:24:54.275371 1029063 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e \
	I1018 13:24:54.275395 1029063 kubeadm.go:318] 	--control-plane 
	I1018 13:24:54.275403 1029063 kubeadm.go:318] 
	I1018 13:24:54.275491 1029063 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 13:24:54.275500 1029063 kubeadm.go:318] 
	I1018 13:24:54.275585 1029063 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token celdjk.odjq7panvfe244w0 \
	I1018 13:24:54.275723 1029063 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e 
	I1018 13:24:54.280769 1029063 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 13:24:54.281034 1029063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 13:24:54.281159 1029063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
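Note: the join command printed above embeds a --discovery-token-ca-cert-hash. As a sketch, that value can be recomputed from the cluster CA using the standard kubeadm recipe, assuming an RSA CA key (which minikube generates); on this node the CA is the /var/lib/minikube/certs/ca.crt copied earlier, and the output should match the sha256 value in the join command:
# Recompute the CA public-key hash used by --discovery-token-ca-cert-hash.
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'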
	I1018 13:24:54.281186 1029063 cni.go:84] Creating CNI manager for ""
	I1018 13:24:54.281199 1029063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:24:54.284429 1029063 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 13:24:54.287511 1029063 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 13:24:54.291843 1029063 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 13:24:54.291866 1029063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 13:24:54.308827 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
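Note: the kubectl apply above installs the kindnet CNI manifest rendered into /var/tmp/minikube/cni.yaml. A hedged sketch for confirming the CNI pods come up; the DaemonSet name `kindnet` is an assumption based on minikube's bundled manifest, not shown in this log:
# Check that the kindnet DaemonSet rolled out (name assumed from minikube's manifest).
sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system rollout status daemonset kindnet --timeout=120s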
	I1018 13:24:55.064196 1029063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 13:24:55.064340 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:55.064438 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-774829 minikube.k8s.io/updated_at=2025_10_18T13_24_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=embed-certs-774829 minikube.k8s.io/primary=true
	I1018 13:24:55.265024 1029063 ops.go:34] apiserver oom_adj: -16
	I1018 13:24:55.265205 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:55.765884 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:56.265777 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:56.765531 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:57.265298 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:57.765518 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:58.265617 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:58.765347 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:58.910386 1029063 kubeadm.go:1113] duration metric: took 3.846097952s to wait for elevateKubeSystemPrivileges
	I1018 13:24:58.910413 1029063 kubeadm.go:402] duration metric: took 23.774192142s to StartCluster
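Note: the burst of `kubectl get sa default` calls above is minikube polling roughly every 500ms until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration refers to. A sketch of an equivalent manual wait, using the same kubeconfig as the log:
# Poll until the default ServiceAccount is created by the controller-manager.
until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done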
	I1018 13:24:58.910429 1029063 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:58.910490 1029063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:24:58.911867 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:58.912135 1029063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:24:58.912276 1029063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 13:24:58.912528 1029063 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:24:58.912560 1029063 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:24:58.912619 1029063 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-774829"
	I1018 13:24:58.912637 1029063 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-774829"
	I1018 13:24:58.912658 1029063 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:24:58.913436 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:58.913812 1029063 addons.go:69] Setting default-storageclass=true in profile "embed-certs-774829"
	I1018 13:24:58.913837 1029063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-774829"
	I1018 13:24:58.914126 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:58.916671 1029063 out.go:179] * Verifying Kubernetes components...
	I1018 13:24:58.921782 1029063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:24:58.954477 1029063 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:24:58.959220 1029063 addons.go:238] Setting addon default-storageclass=true in "embed-certs-774829"
	I1018 13:24:58.959267 1029063 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:24:58.960120 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:58.960424 1029063 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:24:58.960447 1029063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:24:58.960510 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:59.007543 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:59.008606 1029063 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:24:59.008625 1029063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:24:59.008695 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:59.030378 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:59.284672 1029063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:24:59.326231 1029063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:24:59.343925 1029063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 13:24:59.344118 1029063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:25:00.818416 1029063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.492100719s)
	I1018 13:25:00.818635 1029063 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.47447642s)
	I1018 13:25:00.819822 1029063 node_ready.go:35] waiting up to 6m0s for node "embed-certs-774829" to be "Ready" ...
	I1018 13:25:00.820093 1029063 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.476074191s)
	I1018 13:25:00.820110 1029063 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 13:25:00.823717 1029063 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
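Note: the sed pipeline a few lines up rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the gateway (192.168.76.1), which is what the "host record injected" message confirms. A sketch for checking the result, assuming the stock coredns ConfigMap layout:
# Confirm the injected hosts block in the CoreDNS Corefile.
sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
# Expected (from the sed expression above):
#   hosts {
#      192.168.76.1 host.minikube.internal
#      fallthrough
#   }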
	
	
	==> CRI-O <==
	Oct 18 13:24:34 no-preload-779884 crio[648]: time="2025-10-18T13:24:34.820751972Z" level=info msg="Removed container 405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv/dashboard-metrics-scraper" id=2f3847cb-41b4-48d6-b59b-8c1cdec5a10b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:24:37 no-preload-779884 conmon[1116]: conmon b3c500723387bb4da1aa <ninfo>: container 1118 exited with status 1
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.801358235Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=555144df-6d0c-4555-94de-56be7f3d47d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.802700598Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7cc73c68-cf15-41fe-985e-e4f14631e520 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.806105422Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5618c907-bc3c-4231-8039-1af8e45eacee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.806335668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.816514043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.816684047Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d5918ad2cd6c3fb3e4047f7257ac46eb5a01efe7d6139cb9dc1462deb1d2e432/merged/etc/passwd: no such file or directory"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.816708031Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d5918ad2cd6c3fb3e4047f7257ac46eb5a01efe7d6139cb9dc1462deb1d2e432/merged/etc/group: no such file or directory"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.816950174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.852255372Z" level=info msg="Created container ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d: kube-system/storage-provisioner/storage-provisioner" id=5618c907-bc3c-4231-8039-1af8e45eacee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.853369704Z" level=info msg="Starting container: ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d" id=9c11297b-95ed-4ea9-bc3b-4508e3d7dfff name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.855045199Z" level=info msg="Started container" PID=1638 containerID=ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d description=kube-system/storage-provisioner/storage-provisioner id=9c11297b-95ed-4ea9-bc3b-4508e3d7dfff name=/runtime.v1.RuntimeService/StartContainer sandboxID=67fcf111360fd822004a22d42f50cef3035822967ccc572288e41fc54839ceeb
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.4407289Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.448701364Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.448859338Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.448931323Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.457700848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.457855959Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.457935271Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.46408741Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.464249143Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.464562155Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.468492563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.468633733Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ec4143fe6f433       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           25 seconds ago       Running             storage-provisioner         2                   67fcf111360fd       storage-provisioner                          kube-system
	a7a1e2a74e7c5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago       Exited              dashboard-metrics-scraper   2                   f107a37951190       dashboard-metrics-scraper-6ffb444bf9-dlmvv   kubernetes-dashboard
	1f6ed76f33ec4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   bf02e60ba6a94       kubernetes-dashboard-855c9754f9-qspqp        kubernetes-dashboard
	2acc930446281       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   4ffb56d4d25c6       coredns-66bc5c9577-fdgz7                     kube-system
	92c1209861275       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   4bebe59b30a05       kube-proxy-z6q26                             kube-system
	de4939a14bc85       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   3a195ae56d2ae       busybox                                      default
	a7d6452329a4e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   4c410b18c6d89       kindnet-gc7k5                                kube-system
	b3c500723387b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           56 seconds ago       Exited              storage-provisioner         1                   67fcf111360fd       storage-provisioner                          kube-system
	16c6ce16d1fed       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   7c2ed59e94826       kube-apiserver-no-preload-779884             kube-system
	208bca4af3d9a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b29a2f404fa00       kube-controller-manager-no-preload-779884    kube-system
	1091417c452eb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2b68c90038e20       etcd-no-preload-779884                       kube-system
	efb92bf1d21e5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   4247852c4cd93       kube-scheduler-no-preload-779884             kube-system
	
	
	==> coredns [2acc93044628169ea6436041737d72874995cec0bf258b196d67674ce66e5b1a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39004 - 38589 "HINFO IN 4416088844392900103.8845416520724328176. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034558294s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-779884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-779884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=no-preload-779884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_23_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:23:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-779884
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:24:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:24:57 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:24:57 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:24:57 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:24:57 +0000   Sat, 18 Oct 2025 13:23:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-779884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                42ba3d0a-7b48-4d7d-a694-f3722a91765b
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-fdgz7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-no-preload-779884                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-gc7k5                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-779884              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-779884     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-z6q26                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-779884              100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dlmvv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qspqp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node no-preload-779884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node no-preload-779884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node no-preload-779884 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-779884 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-779884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-779884 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           112s                 node-controller  Node no-preload-779884 event: Registered Node no-preload-779884 in Controller
	  Normal   NodeReady                96s                  kubelet          Node no-preload-779884 status is now: NodeReady
	  Normal   Starting                 64s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)    kubelet          Node no-preload-779884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)    kubelet          Node no-preload-779884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)    kubelet          Node no-preload-779884 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                  node-controller  Node no-preload-779884 event: Registered Node no-preload-779884 in Controller
	
	
	==> dmesg <==
	[Oct18 12:59] overlayfs: idmapped layers are currently not supported
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1091417c452eb2cd93c4e416c602e6b8b1e09d9cd4a8210ef02cbdf618a5faba] <==
	{"level":"warn","ts":"2025-10-18T13:24:03.603326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.636872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.686089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.762417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.801354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.853076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.879742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.913732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.935664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.973278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.990624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.996361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.022379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.052384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.074424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.113217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.136423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.153724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.195836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.199351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.237090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.324169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.367833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.392561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.486596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33060","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:25:03 up  5:07,  0 user,  load average: 3.17, 2.99, 2.46
	Linux no-preload-779884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a7d6452329a4e7db4dab4d762e866f4e2b95ded5b24f3cba614f53534faacde7] <==
	I1018 13:24:07.233608       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:24:07.315336       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:24:07.315762       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:24:07.315778       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:24:07.315794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:24:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:24:07.442295       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:24:07.442316       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:24:07.442324       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:24:07.442633       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:24:37.439875       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:24:37.442315       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:24:37.443593       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:24:37.443742       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:24:39.042581       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:24:39.042621       1 metrics.go:72] Registering metrics
	I1018 13:24:39.042690       1 controller.go:711] "Syncing nftables rules"
	I1018 13:24:47.439780       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:24:47.439827       1 main.go:301] handling current node
	I1018 13:24:57.439687       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:24:57.439721       1 main.go:301] handling current node
	
	
	==> kube-apiserver [16c6ce16d1fedc6c8abc8dcc8ec26540a3b027cb3aae542e5bb96bce20f62f4a] <==
	I1018 13:24:05.982792       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 13:24:05.983593       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 13:24:05.983603       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 13:24:05.988968       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:24:05.989320       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 13:24:05.989368       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:24:06.002410       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:24:06.012656       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:24:06.014454       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 13:24:06.014650       1 aggregator.go:171] initial CRD sync complete...
	I1018 13:24:06.014670       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:24:06.014678       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:24:06.014684       1 cache.go:39] Caches are synced for autoregister controller
	E1018 13:24:06.066526       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:24:06.294696       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:24:06.457290       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:24:07.768528       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:24:08.070615       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:24:08.177130       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:24:08.237772       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:24:08.605929       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.95.219"}
	I1018 13:24:08.736546       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.52.116"}
	I1018 13:24:11.405142       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:24:11.603794       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:24:11.650498       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [208bca4af3d9ac07c17e2bc79bac77257a4dc9124d606f9ab23f83508618bc86] <==
	I1018 13:24:11.193405       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:24:11.199765       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:24:11.200130       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 13:24:11.200313       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 13:24:11.200387       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 13:24:11.200441       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 13:24:11.200475       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 13:24:11.200505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:24:11.207044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 13:24:11.207139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 13:24:11.207166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 13:24:11.207199       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 13:24:11.215734       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:24:11.216007       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 13:24:11.219892       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:24:11.219990       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:24:11.220006       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:24:11.220015       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:24:11.221216       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 13:24:11.221286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:24:11.227470       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 13:24:11.227577       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:24:11.231689       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 13:24:11.241846       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:24:11.242839       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [92c12098612753238b3bbdae055f559ac0d4a79535b3b02cd6cb0eb6938f7daf] <==
	I1018 13:24:08.063131       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:24:08.551992       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:24:08.654459       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:24:08.654497       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 13:24:08.654570       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:24:08.815995       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:24:08.816067       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:24:08.822021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:24:08.822417       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:24:08.830254       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:24:08.831780       1 config.go:200] "Starting service config controller"
	I1018 13:24:08.831861       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:24:08.832289       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:24:08.839570       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:24:08.839745       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:24:08.839787       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:24:08.854807       1 config.go:309] "Starting node config controller"
	I1018 13:24:08.854900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:24:08.854931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:24:08.932488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:24:08.939849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 13:24:08.939893       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [efb92bf1d21e5e703b78994086ed6ac620b757e0d9ce9d0c24ad65c41901b598] <==
	I1018 13:24:03.803823       1 serving.go:386] Generated self-signed cert in-memory
	W1018 13:24:05.963492       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 13:24:05.963518       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 13:24:05.963528       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 13:24:05.963535       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 13:24:06.104099       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:24:06.104130       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:24:06.108908       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:24:06.109433       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:24:06.109458       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:24:06.109630       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 13:24:06.214613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:24:11 no-preload-779884 kubelet[761]: I1018 13:24:11.834962     761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1f23572d-7222-468a-ad61-4d569a419382-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-dlmvv\" (UID: \"1f23572d-7222-468a-ad61-4d569a419382\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv"
	Oct 18 13:24:12 no-preload-779884 kubelet[761]: W1018 13:24:12.176939     761 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/crio-f107a37951190075b4741d4fc297c485e83c44b63f2d22f2096dfd84fd9c3e6f WatchSource:0}: Error finding container f107a37951190075b4741d4fc297c485e83c44b63f2d22f2096dfd84fd9c3e6f: Status 404 returned error can't find the container with id f107a37951190075b4741d4fc297c485e83c44b63f2d22f2096dfd84fd9c3e6f
	Oct 18 13:24:12 no-preload-779884 kubelet[761]: W1018 13:24:12.222445     761 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/crio-bf02e60ba6a94fda88adb2886ffc37203b9f39db8286d28869252989184dd4ee WatchSource:0}: Error finding container bf02e60ba6a94fda88adb2886ffc37203b9f39db8286d28869252989184dd4ee: Status 404 returned error can't find the container with id bf02e60ba6a94fda88adb2886ffc37203b9f39db8286d28869252989184dd4ee
	Oct 18 13:24:16 no-preload-779884 kubelet[761]: I1018 13:24:16.189555     761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 13:24:19 no-preload-779884 kubelet[761]: I1018 13:24:19.742305     761 scope.go:117] "RemoveContainer" containerID="8f34619c36a56e9ba9955b7057053be58aa0df1e422488f82e354bad17059a8b"
	Oct 18 13:24:20 no-preload-779884 kubelet[761]: I1018 13:24:20.742652     761 scope.go:117] "RemoveContainer" containerID="8f34619c36a56e9ba9955b7057053be58aa0df1e422488f82e354bad17059a8b"
	Oct 18 13:24:20 no-preload-779884 kubelet[761]: I1018 13:24:20.743157     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:20 no-preload-779884 kubelet[761]: E1018 13:24:20.743764     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:21 no-preload-779884 kubelet[761]: I1018 13:24:21.746522     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:21 no-preload-779884 kubelet[761]: E1018 13:24:21.752010     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:22 no-preload-779884 kubelet[761]: I1018 13:24:22.750056     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:22 no-preload-779884 kubelet[761]: E1018 13:24:22.750226     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: I1018 13:24:34.381426     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: I1018 13:24:34.788965     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: I1018 13:24:34.789396     761 scope.go:117] "RemoveContainer" containerID="a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: E1018 13:24:34.789656     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: I1018 13:24:34.824621     761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qspqp" podStartSLOduration=9.973998345 podStartE2EDuration="23.8246037s" podCreationTimestamp="2025-10-18 13:24:11 +0000 UTC" firstStartedPulling="2025-10-18 13:24:12.22954613 +0000 UTC m=+13.006410816" lastFinishedPulling="2025-10-18 13:24:26.080151477 +0000 UTC m=+26.857016171" observedRunningTime="2025-10-18 13:24:26.784045835 +0000 UTC m=+27.560910546" watchObservedRunningTime="2025-10-18 13:24:34.8246037 +0000 UTC m=+35.601468411"
	Oct 18 13:24:37 no-preload-779884 kubelet[761]: I1018 13:24:37.800948     761 scope.go:117] "RemoveContainer" containerID="b3c500723387bb4da1aa6ad2a497c1876e4fde0299065a39583b6fe2b5665375"
	Oct 18 13:24:42 no-preload-779884 kubelet[761]: I1018 13:24:42.125895     761 scope.go:117] "RemoveContainer" containerID="a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	Oct 18 13:24:42 no-preload-779884 kubelet[761]: E1018 13:24:42.126107     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:54 no-preload-779884 kubelet[761]: I1018 13:24:54.381266     761 scope.go:117] "RemoveContainer" containerID="a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	Oct 18 13:24:54 no-preload-779884 kubelet[761]: E1018 13:24:54.381478     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:25:01 no-preload-779884 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:25:01 no-preload-779884 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:25:01 no-preload-779884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1f6ed76f33ec41de20d84a4d205fc3deae7b56485352e204bf54584af25765f4] <==
	2025/10/18 13:24:26 Using namespace: kubernetes-dashboard
	2025/10/18 13:24:26 Using in-cluster config to connect to apiserver
	2025/10/18 13:24:26 Using secret token for csrf signing
	2025/10/18 13:24:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 13:24:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 13:24:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 13:24:26 Generating JWE encryption key
	2025/10/18 13:24:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 13:24:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 13:24:26 Initializing JWE encryption key from synchronized object
	2025/10/18 13:24:26 Creating in-cluster Sidecar client
	2025/10/18 13:24:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:24:26 Serving insecurely on HTTP port: 9090
	2025/10/18 13:24:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:24:26 Starting overwatch
	
	
	==> storage-provisioner [b3c500723387bb4da1aa6ad2a497c1876e4fde0299065a39583b6fe2b5665375] <==
	I1018 13:24:07.360665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 13:24:37.426752       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d] <==
	I1018 13:24:37.869140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:24:37.890050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:24:37.890105       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 13:24:37.894730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:41.351169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:45.611494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:49.224729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:52.278512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:55.300205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:55.310219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:24:55.310375       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:24:55.310534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-779884_29afe671-2ea7-425f-889e-16e766a03847!
	I1018 13:24:55.310988       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd1ede5c-2fc2-42b5-a458-71159756ac6f", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-779884_29afe671-2ea7-425f-889e-16e766a03847 became leader
	W1018 13:24:55.326535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:55.343895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:24:55.411551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-779884_29afe671-2ea7-425f-889e-16e766a03847!
	W1018 13:24:57.348166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:57.356356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:59.360735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:59.367992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:01.371237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:01.378171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:03.381170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:03.386421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-779884 -n no-preload-779884
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-779884 -n no-preload-779884: exit status 2 (399.544391ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-779884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-779884
helpers_test.go:243: (dbg) docker inspect no-preload-779884:

-- stdout --
	[
	    {
	        "Id": "78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45",
	        "Created": "2025-10-18T13:22:22.245395401Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1026377,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:23:52.774083852Z",
	            "FinishedAt": "2025-10-18T13:23:51.968755897Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/hostname",
	        "HostsPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/hosts",
	        "LogPath": "/var/lib/docker/containers/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45-json.log",
	        "Name": "/no-preload-779884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-779884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-779884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45",
	                "LowerDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf7cbda79a1214e9941643ce17a2c8c022ea209eb5af6649278549e348d49714/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-779884",
	                "Source": "/var/lib/docker/volumes/no-preload-779884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-779884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-779884",
	                "name.minikube.sigs.k8s.io": "no-preload-779884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b29cd3feb3b3e51009619c48a29cb74086e807d40b0c1d7fd7fccbda66f7f2b7",
	            "SandboxKey": "/var/run/docker/netns/b29cd3feb3b3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34172"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34173"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34176"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34174"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34175"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-779884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:1e:b1:0a:87:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "939cb65a3289c015d5d4b8e7692a9fb9fd1181110d0a4789eecbc7983e7821f8",
	                    "EndpointID": "4e1b85d700d85419468417372feb7da171e627951d54eabb9b8ca95bd77d6b13",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-779884",
	                        "78baa17fea0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
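The empty "HostPort" fields under HostConfig.PortBindings in the inspect output above are expected for ports published in the 127.0.0.1:: form with no fixed host port: Docker assigns ephemeral host ports at container start, and they show up only under NetworkSettings.Ports (34172-34176 for this container). As a minimal illustration, using the same Go template the minikube helpers run later in this log, the mapped SSH port can be read back with:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-779884

Here no-preload-779884 is simply the container name taken from the inspect output above; any other profile's container name would be queried the same way.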
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-779884 -n no-preload-779884
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-779884 -n no-preload-779884: exit status 2 (373.854878ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-779884 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-779884 logs -n 25: (1.362344138s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-076887   │ jenkins │ v1.37.0 │ 18 Oct 25 13:18 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p force-systemd-env-914730                                                                                                                                                                                                                   │ force-systemd-env-914730 │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p cert-options-179041 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-179041      │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ cert-options-179041 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-179041      │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ ssh     │ -p cert-options-179041 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-179041      │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ delete  │ -p cert-options-179041                                                                                                                                                                                                                        │ cert-options-179041      │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │                     │
	│ stop    │ -p old-k8s-version-460322 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │ 18 Oct 25 13:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-460322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-460322 image list --format=json                                                                                                                                                                                               │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ pause   │ -p old-k8s-version-460322 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │                     │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-076887   │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │                     │
	│ stop    │ -p no-preload-779884 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ addons  │ enable dashboard -p no-preload-779884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:24 UTC │
	│ delete  │ -p cert-expiration-076887                                                                                                                                                                                                                     │ cert-expiration-076887   │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:24 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829       │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │                     │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                                                                                                    │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-779884        │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:24:19
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:24:19.537771 1029063 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:24:19.538046 1029063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:24:19.538083 1029063 out.go:374] Setting ErrFile to fd 2...
	I1018 13:24:19.538105 1029063 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:24:19.538456 1029063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:24:19.539041 1029063 out.go:368] Setting JSON to false
	I1018 13:24:19.540432 1029063 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18412,"bootTime":1760775448,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:24:19.540562 1029063 start.go:141] virtualization:  
	I1018 13:24:19.544348 1029063 out.go:179] * [embed-certs-774829] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:24:19.548601 1029063 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:24:19.548681 1029063 notify.go:220] Checking for updates...
	I1018 13:24:19.556454 1029063 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:24:19.559593 1029063 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:24:19.563199 1029063 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:24:19.566327 1029063 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:24:19.569463 1029063 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:24:19.573000 1029063 config.go:182] Loaded profile config "no-preload-779884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:24:19.573165 1029063 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:24:19.625633 1029063 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:24:19.625853 1029063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:24:19.739860 1029063 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:24:19.727622327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:24:19.739977 1029063 docker.go:318] overlay module found
	I1018 13:24:19.744158 1029063 out.go:179] * Using the docker driver based on user configuration
	I1018 13:24:19.747098 1029063 start.go:305] selected driver: docker
	I1018 13:24:19.747122 1029063 start.go:925] validating driver "docker" against <nil>
	I1018 13:24:19.747138 1029063 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:24:19.747969 1029063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:24:19.870414 1029063 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:24:19.858184088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:24:19.870566 1029063 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 13:24:19.870792 1029063 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:24:19.873892 1029063 out.go:179] * Using Docker driver with root privileges
	I1018 13:24:19.876848 1029063 cni.go:84] Creating CNI manager for ""
	I1018 13:24:19.876920 1029063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:24:19.876929 1029063 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 13:24:19.877012 1029063 start.go:349] cluster config:
	{Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:24:19.880295 1029063 out.go:179] * Starting "embed-certs-774829" primary control-plane node in "embed-certs-774829" cluster
	I1018 13:24:19.882823 1029063 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:24:19.886231 1029063 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:24:19.889268 1029063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:24:19.889303 1029063 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:24:19.889326 1029063 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:24:19.889353 1029063 cache.go:58] Caching tarball of preloaded images
	I1018 13:24:19.889441 1029063 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:24:19.889450 1029063 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:24:19.889578 1029063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/config.json ...
	I1018 13:24:19.889596 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/config.json: {Name:mkbd40880e9246c893533c4b7cafc7e61f9252f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:19.911079 1029063 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:24:19.911097 1029063 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:24:19.911111 1029063 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:24:19.911145 1029063 start.go:360] acquireMachinesLock for embed-certs-774829: {Name:mk5aa8563d93509fb0e97633ae4ffa1630655c85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:24:19.911246 1029063 start.go:364] duration metric: took 79.41µs to acquireMachinesLock for "embed-certs-774829"
	I1018 13:24:19.911272 1029063 start.go:93] Provisioning new machine with config: &{Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:24:19.911341 1029063 start.go:125] createHost starting for "" (driver="docker")
	W1018 13:24:18.086391 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:20.588170 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:19.914965 1029063 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 13:24:19.915298 1029063 start.go:159] libmachine.API.Create for "embed-certs-774829" (driver="docker")
	I1018 13:24:19.915356 1029063 client.go:168] LocalClient.Create starting
	I1018 13:24:19.915455 1029063 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 13:24:19.915526 1029063 main.go:141] libmachine: Decoding PEM data...
	I1018 13:24:19.915567 1029063 main.go:141] libmachine: Parsing certificate...
	I1018 13:24:19.915755 1029063 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 13:24:19.915811 1029063 main.go:141] libmachine: Decoding PEM data...
	I1018 13:24:19.915842 1029063 main.go:141] libmachine: Parsing certificate...
	I1018 13:24:19.916339 1029063 cli_runner.go:164] Run: docker network inspect embed-certs-774829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 13:24:19.946910 1029063 cli_runner.go:211] docker network inspect embed-certs-774829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 13:24:19.946988 1029063 network_create.go:284] running [docker network inspect embed-certs-774829] to gather additional debugging logs...
	I1018 13:24:19.947018 1029063 cli_runner.go:164] Run: docker network inspect embed-certs-774829
	W1018 13:24:19.975081 1029063 cli_runner.go:211] docker network inspect embed-certs-774829 returned with exit code 1
	I1018 13:24:19.975107 1029063 network_create.go:287] error running [docker network inspect embed-certs-774829]: docker network inspect embed-certs-774829: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-774829 not found
	I1018 13:24:19.975121 1029063 network_create.go:289] output of [docker network inspect embed-certs-774829]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-774829 not found
	
	** /stderr **
	I1018 13:24:19.975216 1029063 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:24:19.992497 1029063 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
	I1018 13:24:19.992876 1029063 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b162987809b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:5f:25:ac:cd:2a} reservation:<nil>}
	I1018 13:24:19.993109 1029063 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c986d614dab5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:69:4f:12:e6:e4} reservation:<nil>}
	I1018 13:24:19.993531 1029063 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001995f60}
	I1018 13:24:19.993549 1029063 network_create.go:124] attempt to create docker network embed-certs-774829 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 13:24:19.993609 1029063 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-774829 embed-certs-774829
	I1018 13:24:20.073093 1029063 network_create.go:108] docker network embed-certs-774829 192.168.76.0/24 created
	I1018 13:24:20.073125 1029063 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-774829" container
	I1018 13:24:20.073206 1029063 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 13:24:20.091134 1029063 cli_runner.go:164] Run: docker volume create embed-certs-774829 --label name.minikube.sigs.k8s.io=embed-certs-774829 --label created_by.minikube.sigs.k8s.io=true
	I1018 13:24:20.114618 1029063 oci.go:103] Successfully created a docker volume embed-certs-774829
	I1018 13:24:20.114704 1029063 cli_runner.go:164] Run: docker run --rm --name embed-certs-774829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-774829 --entrypoint /usr/bin/test -v embed-certs-774829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 13:24:20.887610 1029063 oci.go:107] Successfully prepared a docker volume embed-certs-774829
	I1018 13:24:20.887693 1029063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:24:20.887715 1029063 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 13:24:20.887787 1029063 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-774829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 13:24:23.084557 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:25.582000 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:26.708308 1029063 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-774829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.82048424s)
	I1018 13:24:26.708337 1029063 kic.go:203] duration metric: took 5.8206197s to extract preloaded images to volume ...
	W1018 13:24:26.708499 1029063 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 13:24:26.708619 1029063 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 13:24:26.770364 1029063 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-774829 --name embed-certs-774829 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-774829 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-774829 --network embed-certs-774829 --ip 192.168.76.2 --volume embed-certs-774829:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 13:24:27.104707 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Running}}
	I1018 13:24:27.124876 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:27.147257 1029063 cli_runner.go:164] Run: docker exec embed-certs-774829 stat /var/lib/dpkg/alternatives/iptables
	I1018 13:24:27.204186 1029063 oci.go:144] the created container "embed-certs-774829" has a running status.
	I1018 13:24:27.204216 1029063 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa...
	I1018 13:24:29.093185 1029063 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 13:24:29.114188 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:29.136011 1029063 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 13:24:29.136037 1029063 kic_runner.go:114] Args: [docker exec --privileged embed-certs-774829 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 13:24:29.207038 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:29.234262 1029063 machine.go:93] provisionDockerMachine start ...
	I1018 13:24:29.234358 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:29.261846 1029063 main.go:141] libmachine: Using SSH client type: native
	I1018 13:24:29.262181 1029063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1018 13:24:29.262191 1029063 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:24:29.431529 1029063 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-774829
	
	I1018 13:24:29.431565 1029063 ubuntu.go:182] provisioning hostname "embed-certs-774829"
	I1018 13:24:29.431632 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:29.457002 1029063 main.go:141] libmachine: Using SSH client type: native
	I1018 13:24:29.457377 1029063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1018 13:24:29.457393 1029063 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-774829 && echo "embed-certs-774829" | sudo tee /etc/hostname
	W1018 13:24:27.587128 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:30.086404 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:29.649152 1029063 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-774829
	
	I1018 13:24:29.649236 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:29.668011 1029063 main.go:141] libmachine: Using SSH client type: native
	I1018 13:24:29.668321 1029063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1018 13:24:29.668345 1029063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-774829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-774829/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-774829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:24:29.815890 1029063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:24:29.815987 1029063 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:24:29.816045 1029063 ubuntu.go:190] setting up certificates
	I1018 13:24:29.816074 1029063 provision.go:84] configureAuth start
	I1018 13:24:29.816181 1029063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:24:29.834999 1029063 provision.go:143] copyHostCerts
	I1018 13:24:29.835065 1029063 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:24:29.835081 1029063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:24:29.835156 1029063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:24:29.835241 1029063 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:24:29.835246 1029063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:24:29.835270 1029063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:24:29.835317 1029063 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:24:29.835322 1029063 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:24:29.835342 1029063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:24:29.835388 1029063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.embed-certs-774829 san=[127.0.0.1 192.168.76.2 embed-certs-774829 localhost minikube]
	I1018 13:24:30.526358 1029063 provision.go:177] copyRemoteCerts
	I1018 13:24:30.526430 1029063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:24:30.526474 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:30.543911 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:30.648552 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:24:30.670878 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 13:24:30.693224 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:24:30.712300 1029063 provision.go:87] duration metric: took 896.197444ms to configureAuth
	I1018 13:24:30.712326 1029063 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:24:30.712520 1029063 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:24:30.712640 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:30.730294 1029063 main.go:141] libmachine: Using SSH client type: native
	I1018 13:24:30.730609 1029063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34177 <nil> <nil>}
	I1018 13:24:30.730630 1029063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:24:31.094543 1029063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:24:31.094572 1029063 machine.go:96] duration metric: took 1.860286003s to provisionDockerMachine
	I1018 13:24:31.094583 1029063 client.go:171] duration metric: took 11.179208363s to LocalClient.Create
	I1018 13:24:31.094602 1029063 start.go:167] duration metric: took 11.17930785s to libmachine.API.Create "embed-certs-774829"
	I1018 13:24:31.094615 1029063 start.go:293] postStartSetup for "embed-certs-774829" (driver="docker")
	I1018 13:24:31.094626 1029063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:24:31.094699 1029063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:24:31.094753 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:31.116210 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:31.224991 1029063 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:24:31.228852 1029063 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:24:31.228884 1029063 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:24:31.228897 1029063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:24:31.229044 1029063 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:24:31.229191 1029063 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:24:31.229344 1029063 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:24:31.237593 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:24:31.258089 1029063 start.go:296] duration metric: took 163.4591ms for postStartSetup
	I1018 13:24:31.258486 1029063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:24:31.276390 1029063 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/config.json ...
	I1018 13:24:31.276676 1029063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:24:31.276729 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:31.293422 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:31.401682 1029063 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:24:31.406784 1029063 start.go:128] duration metric: took 11.495428117s to createHost
	I1018 13:24:31.406821 1029063 start.go:83] releasing machines lock for "embed-certs-774829", held for 11.49556571s
	I1018 13:24:31.406921 1029063 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:24:31.423929 1029063 ssh_runner.go:195] Run: cat /version.json
	I1018 13:24:31.424006 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:31.424087 1029063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:24:31.424149 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:31.448225 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:31.448612 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:31.551810 1029063 ssh_runner.go:195] Run: systemctl --version
	I1018 13:24:31.651308 1029063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:24:31.704521 1029063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:24:31.709791 1029063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:24:31.709862 1029063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:24:31.739981 1029063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 13:24:31.740002 1029063 start.go:495] detecting cgroup driver to use...
	I1018 13:24:31.740037 1029063 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:24:31.740092 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:24:31.759198 1029063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:24:31.772620 1029063 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:24:31.772685 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:24:31.791771 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:24:31.811536 1029063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:24:31.936217 1029063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:24:32.067193 1029063 docker.go:234] disabling docker service ...
	I1018 13:24:32.067317 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:24:32.097233 1029063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:24:32.112544 1029063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:24:32.227126 1029063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:24:32.362775 1029063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:24:32.376695 1029063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:24:32.393611 1029063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:24:32.393675 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.404080 1029063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:24:32.404202 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.414863 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.424586 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.434184 1029063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:24:32.443853 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.453158 1029063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.467883 1029063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:24:32.476882 1029063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:24:32.484543 1029063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:24:32.492343 1029063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:24:32.608245 1029063 ssh_runner.go:195] Run: sudo systemctl restart crio
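	[Editor's note] Taken together, the sed edits above leave the edited keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like the sketch below. This is a reconstruction from the commands in this log, not a capture of the file on the node; the TOML section headers are an assumption (the log only edits individual keys), and the heredoc is illustrative only, since running it verbatim would drop whatever other keys the base image ships in that drop-in.

# Illustrative end state of the keys edited above (pause image and cgroup driver taken from this log).
sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
# Pick up the change, as the log does next.
sudo systemctl daemon-reload && sudo systemctl restart crio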
	I1018 13:24:32.741728 1029063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:24:32.741843 1029063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:24:32.745976 1029063 start.go:563] Will wait 60s for crictl version
	I1018 13:24:32.746100 1029063 ssh_runner.go:195] Run: which crictl
	I1018 13:24:32.750630 1029063 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:24:32.780290 1029063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:24:32.780446 1029063 ssh_runner.go:195] Run: crio --version
	I1018 13:24:32.813473 1029063 ssh_runner.go:195] Run: crio --version
	I1018 13:24:32.850549 1029063 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:24:32.853478 1029063 cli_runner.go:164] Run: docker network inspect embed-certs-774829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:24:32.871615 1029063 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 13:24:32.875840 1029063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:24:32.885740 1029063 kubeadm.go:883] updating cluster {Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:24:32.885850 1029063 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:24:32.885907 1029063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:24:32.925877 1029063 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:24:32.925899 1029063 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:24:32.925956 1029063 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:24:32.956374 1029063 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:24:32.956450 1029063 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:24:32.956475 1029063 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 13:24:32.956608 1029063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-774829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
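	[Editor's note] The ExecStart override above is the kubelet drop-in minikube generates for this profile. A minimal sketch for inspecting it on the node once it has been copied over; the drop-in path appears further down in this log, while the `minikube ssh` invocation and profile flag are assumptions about how one would check it from the host.

# Hypothetical inspection of the kubelet unit and minikube's drop-in inside the node.
minikube -p embed-certs-774829 ssh -- systemctl cat kubelet.service
minikube -p embed-certs-774829 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf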
	I1018 13:24:32.956730 1029063 ssh_runner.go:195] Run: crio config
	I1018 13:24:33.025702 1029063 cni.go:84] Creating CNI manager for ""
	I1018 13:24:33.025777 1029063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:24:33.025802 1029063 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:24:33.025825 1029063 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-774829 NodeName:embed-certs-774829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:24:33.025955 1029063 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-774829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
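	[Editor's note] A config like the one above can be exercised before it mutates node state. A minimal sketch, assuming it is run inside the node, that the file sits at the path used later in this log (/var/tmp/minikube/kubeadm.yaml), and using the same pinned kubeadm binary the init step below uses:

# Hypothetical dry run of the generated kubeadm config (renders manifests without installing them).
sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
# Run only the preflight checks, ignoring SystemVerification as minikube does for the docker driver.
sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=SystemVerification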
	I1018 13:24:33.026035 1029063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:24:33.034372 1029063 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:24:33.034466 1029063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:24:33.042330 1029063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 13:24:33.056828 1029063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:24:33.070682 1029063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 13:24:33.087130 1029063 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:24:33.091108 1029063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:24:33.101522 1029063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:24:33.221662 1029063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:24:33.242346 1029063 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829 for IP: 192.168.76.2
	I1018 13:24:33.242368 1029063 certs.go:195] generating shared ca certs ...
	I1018 13:24:33.242386 1029063 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:33.242588 1029063 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:24:33.242659 1029063 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:24:33.242672 1029063 certs.go:257] generating profile certs ...
	I1018 13:24:33.242754 1029063 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.key
	I1018 13:24:33.242774 1029063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.crt with IP's: []
	I1018 13:24:33.749862 1029063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.crt ...
	I1018 13:24:33.749896 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.crt: {Name:mk2cf11d98d4444b656532354b0ad79b03575cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:33.750128 1029063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.key ...
	I1018 13:24:33.750148 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.key: {Name:mkecd3f6cbbe1ca98793c069c19e67bbbfca1e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:33.750276 1029063 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f
	I1018 13:24:33.750298 1029063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt.971cb07f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 13:24:34.052338 1029063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt.971cb07f ...
	I1018 13:24:34.052371 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt.971cb07f: {Name:mkb02cb984db27304bc478ff8bb617ce55ed1072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:34.052611 1029063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f ...
	I1018 13:24:34.052631 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f: {Name:mkb31dcce150812aac8b5039a2b291917959b528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:34.052762 1029063 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt.971cb07f -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt
	I1018 13:24:34.052854 1029063 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key
	I1018 13:24:34.052922 1029063 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key
	I1018 13:24:34.052940 1029063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt with IP's: []
	I1018 13:24:34.684284 1029063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt ...
	I1018 13:24:34.684318 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt: {Name:mk5584cd951d55af03be0d2a1675865c4ff64332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:34.684540 1029063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key ...
	I1018 13:24:34.684557 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key: {Name:mk0b31513714281203a7f5dea81ea3729fac281e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:34.684765 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:24:34.684813 1029063 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:24:34.684827 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:24:34.684854 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:24:34.684882 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:24:34.684911 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:24:34.684958 1029063 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:24:34.685537 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:24:34.704215 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:24:34.722387 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:24:34.740655 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:24:34.761491 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 13:24:34.780623 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:24:34.814883 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:24:34.835625 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 13:24:34.854093 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:24:34.872760 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:24:34.891684 1029063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:24:34.910649 1029063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:24:34.924174 1029063 ssh_runner.go:195] Run: openssl version
	I1018 13:24:34.930941 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:24:34.939467 1029063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:24:34.943479 1029063 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:24:34.943585 1029063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:24:34.985973 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:24:34.994839 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:24:35.006137 1029063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:24:35.012326 1029063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:24:35.012432 1029063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:24:35.058151 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:24:35.067011 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:24:35.075742 1029063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:24:35.080220 1029063 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:24:35.080307 1029063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:24:35.123085 1029063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
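	[Editor's note] The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding certificates, which is how libssl locates an issuer by hash lookup. A slightly simplified sketch of the convention (the log links via /etc/ssl/certs/<name>.pem first; this links the hash names directly):

# For each CA bundle, create /etc/ssl/certs/<subject-hash>.0 pointing at the certificate.
for pem in /usr/share/ca-certificates/minikubeCA.pem /usr/share/ca-certificates/836086.pem /usr/share/ca-certificates/8360862.pem; do
  hash=$(openssl x509 -hash -noout -in "$pem")
  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
done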
	I1018 13:24:35.132177 1029063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:24:35.136160 1029063 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 13:24:35.136224 1029063 kubeadm.go:400] StartCluster: {Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:24:35.136297 1029063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:24:35.136363 1029063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:24:35.172677 1029063 cri.go:89] found id: ""
	I1018 13:24:35.172756 1029063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:24:35.180961 1029063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 13:24:35.189735 1029063 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 13:24:35.189809 1029063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 13:24:35.198843 1029063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 13:24:35.198866 1029063 kubeadm.go:157] found existing configuration files:
	
	I1018 13:24:35.198941 1029063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 13:24:35.209159 1029063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 13:24:35.209281 1029063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 13:24:35.220102 1029063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 13:24:35.228275 1029063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 13:24:35.228347 1029063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 13:24:35.236765 1029063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 13:24:35.245205 1029063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 13:24:35.245274 1029063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 13:24:35.253542 1029063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 13:24:35.262683 1029063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 13:24:35.262785 1029063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
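	[Editor's note] The four check-and-remove steps above all follow the same pattern; a condensed sketch of that loop, with the paths and control-plane endpoint taken from this log:

# Remove any kubeconfig that does not already point at the expected control-plane endpoint,
# so kubeadm regenerates it instead of reusing a stale file.
endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done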
	I1018 13:24:35.271086 1029063 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 13:24:35.314701 1029063 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 13:24:35.314792 1029063 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 13:24:35.338486 1029063 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 13:24:35.338565 1029063 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 13:24:35.338607 1029063 kubeadm.go:318] OS: Linux
	I1018 13:24:35.338659 1029063 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 13:24:35.338714 1029063 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 13:24:35.338768 1029063 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 13:24:35.338823 1029063 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 13:24:35.338877 1029063 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 13:24:35.338931 1029063 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 13:24:35.338983 1029063 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 13:24:35.339037 1029063 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 13:24:35.339105 1029063 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 13:24:35.418928 1029063 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 13:24:35.419158 1029063 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 13:24:35.419273 1029063 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 13:24:35.434563 1029063 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 13:24:32.583087 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:34.584580 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:36.584705 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:35.439179 1029063 out.go:252]   - Generating certificates and keys ...
	I1018 13:24:35.439281 1029063 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 13:24:35.439370 1029063 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 13:24:35.760334 1029063 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 13:24:36.512541 1029063 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 13:24:37.605448 1029063 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 13:24:38.043362 1029063 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 13:24:38.746038 1029063 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 13:24:38.746186 1029063 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-774829 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 13:24:39.282274 1029063 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 13:24:39.282637 1029063 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-774829 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1018 13:24:38.584813 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:40.586066 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:39.562536 1029063 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 13:24:39.877389 1029063 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 13:24:40.452372 1029063 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 13:24:40.452729 1029063 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 13:24:41.614855 1029063 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 13:24:41.792727 1029063 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 13:24:43.023173 1029063 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 13:24:43.485311 1029063 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 13:24:44.190961 1029063 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 13:24:44.191712 1029063 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 13:24:44.194438 1029063 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 13:24:44.197818 1029063 out.go:252]   - Booting up control plane ...
	I1018 13:24:44.197936 1029063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 13:24:44.198033 1029063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 13:24:44.198114 1029063 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 13:24:44.215876 1029063 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 13:24:44.216465 1029063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 13:24:44.225681 1029063 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 13:24:44.226315 1029063 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 13:24:44.226519 1029063 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 13:24:44.360425 1029063 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 13:24:44.360552 1029063 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1018 13:24:43.084027 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	W1018 13:24:45.090350 1026245 pod_ready.go:104] pod "coredns-66bc5c9577-fdgz7" is not "Ready", error: <nil>
	I1018 13:24:46.582334 1026245 pod_ready.go:94] pod "coredns-66bc5c9577-fdgz7" is "Ready"
	I1018 13:24:46.582358 1026245 pod_ready.go:86] duration metric: took 37.505290008s for pod "coredns-66bc5c9577-fdgz7" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.590479 1026245 pod_ready.go:83] waiting for pod "etcd-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.594808 1026245 pod_ready.go:94] pod "etcd-no-preload-779884" is "Ready"
	I1018 13:24:46.594877 1026245 pod_ready.go:86] duration metric: took 4.374408ms for pod "etcd-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.598132 1026245 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.606142 1026245 pod_ready.go:94] pod "kube-apiserver-no-preload-779884" is "Ready"
	I1018 13:24:46.606166 1026245 pod_ready.go:86] duration metric: took 8.011176ms for pod "kube-apiserver-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.611345 1026245 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.780991 1026245 pod_ready.go:94] pod "kube-controller-manager-no-preload-779884" is "Ready"
	I1018 13:24:46.781070 1026245 pod_ready.go:86] duration metric: took 169.703491ms for pod "kube-controller-manager-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:46.981010 1026245 pod_ready.go:83] waiting for pod "kube-proxy-z6q26" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:47.380339 1026245 pod_ready.go:94] pod "kube-proxy-z6q26" is "Ready"
	I1018 13:24:47.380362 1026245 pod_ready.go:86] duration metric: took 399.32918ms for pod "kube-proxy-z6q26" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:47.581019 1026245 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:47.980701 1026245 pod_ready.go:94] pod "kube-scheduler-no-preload-779884" is "Ready"
	I1018 13:24:47.980726 1026245 pod_ready.go:86] duration metric: took 399.683612ms for pod "kube-scheduler-no-preload-779884" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:24:47.980738 1026245 pod_ready.go:40] duration metric: took 38.912401326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:24:48.081964 1026245 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:24:48.085107 1026245 out.go:179] * Done! kubectl is now configured to use "no-preload-779884" cluster and "default" namespace by default
	I1018 13:24:45.860990 1029063 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500855762s
	I1018 13:24:45.864601 1029063 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 13:24:45.864699 1029063 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 13:24:45.864792 1029063 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 13:24:45.864873 1029063 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 13:24:49.916911 1029063 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.051915213s
	I1018 13:24:52.099020 1029063 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.23430855s
	I1018 13:24:52.866245 1029063 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001515404s
	I1018 13:24:52.888085 1029063 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 13:24:52.901033 1029063 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 13:24:52.916286 1029063 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 13:24:52.916508 1029063 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-774829 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 13:24:52.929110 1029063 kubeadm.go:318] [bootstrap-token] Using token: celdjk.odjq7panvfe244w0
	I1018 13:24:52.932093 1029063 out.go:252]   - Configuring RBAC rules ...
	I1018 13:24:52.932233 1029063 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 13:24:52.937236 1029063 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 13:24:52.952039 1029063 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 13:24:52.956483 1029063 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 13:24:52.960910 1029063 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 13:24:52.973104 1029063 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 13:24:53.273873 1029063 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 13:24:53.726108 1029063 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 13:24:54.273008 1029063 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 13:24:54.274257 1029063 kubeadm.go:318] 
	I1018 13:24:54.274365 1029063 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 13:24:54.274383 1029063 kubeadm.go:318] 
	I1018 13:24:54.274464 1029063 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 13:24:54.274473 1029063 kubeadm.go:318] 
	I1018 13:24:54.274499 1029063 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 13:24:54.274575 1029063 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 13:24:54.274631 1029063 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 13:24:54.274639 1029063 kubeadm.go:318] 
	I1018 13:24:54.274695 1029063 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 13:24:54.274704 1029063 kubeadm.go:318] 
	I1018 13:24:54.274753 1029063 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 13:24:54.274763 1029063 kubeadm.go:318] 
	I1018 13:24:54.274818 1029063 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 13:24:54.274902 1029063 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 13:24:54.274977 1029063 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 13:24:54.274986 1029063 kubeadm.go:318] 
	I1018 13:24:54.275074 1029063 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 13:24:54.275157 1029063 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 13:24:54.275168 1029063 kubeadm.go:318] 
	I1018 13:24:54.275256 1029063 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token celdjk.odjq7panvfe244w0 \
	I1018 13:24:54.275371 1029063 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e \
	I1018 13:24:54.275395 1029063 kubeadm.go:318] 	--control-plane 
	I1018 13:24:54.275403 1029063 kubeadm.go:318] 
	I1018 13:24:54.275491 1029063 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 13:24:54.275500 1029063 kubeadm.go:318] 
	I1018 13:24:54.275585 1029063 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token celdjk.odjq7panvfe244w0 \
	I1018 13:24:54.275723 1029063 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e 
	I1018 13:24:54.280769 1029063 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 13:24:54.281034 1029063 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 13:24:54.281159 1029063 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
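	[Editor's note] The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key in DER form. A minimal sketch of recomputing it on the node, assuming the CA is at the path minikube copied it to earlier in this log (/var/lib/minikube/certs/ca.crt) and that the key is RSA:

# Recompute the discovery token CA cert hash (standard kubeadm convention).
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* /sha256:/'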
	I1018 13:24:54.281186 1029063 cni.go:84] Creating CNI manager for ""
	I1018 13:24:54.281199 1029063 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:24:54.284429 1029063 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 13:24:54.287511 1029063 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 13:24:54.291843 1029063 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 13:24:54.291866 1029063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 13:24:54.308827 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
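	[Editor's note] The manifest applied above is the kindnet CNI configuration minikube recommends for the docker driver with crio. A minimal sketch of confirming it landed; the `minikube ssh` form and the "kindnet" daemonset name are assumptions, the kubectl path and kubeconfig are the ones used throughout this log.

# Hypothetical check that the kindnet CNI config and daemonset exist after the apply above.
minikube -p embed-certs-774829 ssh -- ls /etc/cni/net.d/
minikube -p embed-certs-774829 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet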
	I1018 13:24:55.064196 1029063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 13:24:55.064340 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:55.064438 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-774829 minikube.k8s.io/updated_at=2025_10_18T13_24_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=embed-certs-774829 minikube.k8s.io/primary=true
	I1018 13:24:55.265024 1029063 ops.go:34] apiserver oom_adj: -16
	I1018 13:24:55.265205 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:55.765884 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:56.265777 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:56.765531 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:57.265298 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:57.765518 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:58.265617 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:58.765347 1029063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:24:58.910386 1029063 kubeadm.go:1113] duration metric: took 3.846097952s to wait for elevateKubeSystemPrivileges
	I1018 13:24:58.910413 1029063 kubeadm.go:402] duration metric: took 23.774192142s to StartCluster
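	[Editor's note] The repeated `kubectl get sa default` calls above are a simple poll for the default service account to appear (it is created asynchronously by the controller manager) before kube-system privileges are considered elevated. A minimal sketch of the same wait, run on the node with the kubeconfig path from this log:

# Poll roughly every half second until the "default" service account exists in the default namespace.
until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done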
	I1018 13:24:58.910429 1029063 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:58.910490 1029063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:24:58.911867 1029063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:24:58.912135 1029063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:24:58.912276 1029063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 13:24:58.912528 1029063 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:24:58.912560 1029063 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:24:58.912619 1029063 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-774829"
	I1018 13:24:58.912637 1029063 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-774829"
	I1018 13:24:58.912658 1029063 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:24:58.913436 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:58.913812 1029063 addons.go:69] Setting default-storageclass=true in profile "embed-certs-774829"
	I1018 13:24:58.913837 1029063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-774829"
	I1018 13:24:58.914126 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:58.916671 1029063 out.go:179] * Verifying Kubernetes components...
	I1018 13:24:58.921782 1029063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:24:58.954477 1029063 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:24:58.959220 1029063 addons.go:238] Setting addon default-storageclass=true in "embed-certs-774829"
	I1018 13:24:58.959267 1029063 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:24:58.960120 1029063 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:24:58.960424 1029063 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:24:58.960447 1029063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:24:58.960510 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:59.007543 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:59.008606 1029063 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:24:59.008625 1029063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:24:59.008695 1029063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:24:59.030378 1029063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34177 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:24:59.284672 1029063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:24:59.326231 1029063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:24:59.343925 1029063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 13:24:59.344118 1029063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:25:00.818416 1029063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.492100719s)
	I1018 13:25:00.818635 1029063 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.47447642s)
	I1018 13:25:00.819822 1029063 node_ready.go:35] waiting up to 6m0s for node "embed-certs-774829" to be "Ready" ...
	I1018 13:25:00.820093 1029063 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.476074191s)
	I1018 13:25:00.820110 1029063 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 13:25:00.823717 1029063 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 13:25:00.825809 1029063 addons.go:514] duration metric: took 1.913228853s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1018 13:25:01.324589 1029063 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-774829" context rescaled to 1 replicas
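	[Editor's note] The ConfigMap rewrite at 13:24:59 injects a hosts block for host.minikube.internal into the Corefile, and the line above scales the CoreDNS deployment down to one replica. A minimal sketch of verifying both from the host, assuming kubectl has a context named after the profile (as minikube normally configures):

# Hypothetical verification of the CoreDNS tweaks described above.
kubectl --context embed-certs-774829 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
kubectl --context embed-certs-774829 -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}{"\n"}'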
	W1018 13:25:02.823345 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 13:24:34 no-preload-779884 crio[648]: time="2025-10-18T13:24:34.820751972Z" level=info msg="Removed container 405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv/dashboard-metrics-scraper" id=2f3847cb-41b4-48d6-b59b-8c1cdec5a10b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:24:37 no-preload-779884 conmon[1116]: conmon b3c500723387bb4da1aa <ninfo>: container 1118 exited with status 1
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.801358235Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=555144df-6d0c-4555-94de-56be7f3d47d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.802700598Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7cc73c68-cf15-41fe-985e-e4f14631e520 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.806105422Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5618c907-bc3c-4231-8039-1af8e45eacee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.806335668Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.816514043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.816684047Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d5918ad2cd6c3fb3e4047f7257ac46eb5a01efe7d6139cb9dc1462deb1d2e432/merged/etc/passwd: no such file or directory"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.816708031Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d5918ad2cd6c3fb3e4047f7257ac46eb5a01efe7d6139cb9dc1462deb1d2e432/merged/etc/group: no such file or directory"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.816950174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.852255372Z" level=info msg="Created container ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d: kube-system/storage-provisioner/storage-provisioner" id=5618c907-bc3c-4231-8039-1af8e45eacee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.853369704Z" level=info msg="Starting container: ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d" id=9c11297b-95ed-4ea9-bc3b-4508e3d7dfff name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:24:37 no-preload-779884 crio[648]: time="2025-10-18T13:24:37.855045199Z" level=info msg="Started container" PID=1638 containerID=ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d description=kube-system/storage-provisioner/storage-provisioner id=9c11297b-95ed-4ea9-bc3b-4508e3d7dfff name=/runtime.v1.RuntimeService/StartContainer sandboxID=67fcf111360fd822004a22d42f50cef3035822967ccc572288e41fc54839ceeb
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.4407289Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.448701364Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.448859338Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.448931323Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.457700848Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.457855959Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.457935271Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.46408741Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.464249143Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.464562155Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.468492563Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:24:47 no-preload-779884 crio[648]: time="2025-10-18T13:24:47.468633733Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ec4143fe6f433       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           28 seconds ago       Running             storage-provisioner         2                   67fcf111360fd       storage-provisioner                          kube-system
	a7a1e2a74e7c5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   f107a37951190       dashboard-metrics-scraper-6ffb444bf9-dlmvv   kubernetes-dashboard
	1f6ed76f33ec4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   bf02e60ba6a94       kubernetes-dashboard-855c9754f9-qspqp        kubernetes-dashboard
	2acc930446281       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   4ffb56d4d25c6       coredns-66bc5c9577-fdgz7                     kube-system
	92c1209861275       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   4bebe59b30a05       kube-proxy-z6q26                             kube-system
	de4939a14bc85       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   3a195ae56d2ae       busybox                                      default
	a7d6452329a4e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   4c410b18c6d89       kindnet-gc7k5                                kube-system
	b3c500723387b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   67fcf111360fd       storage-provisioner                          kube-system
	16c6ce16d1fed       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   7c2ed59e94826       kube-apiserver-no-preload-779884             kube-system
	208bca4af3d9a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b29a2f404fa00       kube-controller-manager-no-preload-779884    kube-system
	1091417c452eb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2b68c90038e20       etcd-no-preload-779884                       kube-system
	efb92bf1d21e5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   4247852c4cd93       kube-scheduler-no-preload-779884             kube-system
	
	
	==> coredns [2acc93044628169ea6436041737d72874995cec0bf258b196d67674ce66e5b1a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39004 - 38589 "HINFO IN 4416088844392900103.8845416520724328176. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034558294s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-779884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-779884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=no-preload-779884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_23_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:23:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-779884
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:24:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:24:57 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:24:57 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:24:57 +0000   Sat, 18 Oct 2025 13:23:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:24:57 +0000   Sat, 18 Oct 2025 13:23:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-779884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                42ba3d0a-7b48-4d7d-a694-f3722a91765b
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-fdgz7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-779884                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-gc7k5                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-779884              250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-779884     200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-z6q26                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-779884              100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dlmvv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qspqp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 112s                 kube-proxy       
	  Normal   Starting                 57s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node no-preload-779884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node no-preload-779884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node no-preload-779884 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    119s                 kubelet          Node no-preload-779884 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 119s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  119s                 kubelet          Node no-preload-779884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     119s                 kubelet          Node no-preload-779884 status is now: NodeHasSufficientPID
	  Normal   Starting                 119s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           115s                 node-controller  Node no-preload-779884 event: Registered Node no-preload-779884 in Controller
	  Normal   NodeReady                99s                  kubelet          Node no-preload-779884 status is now: NodeReady
	  Normal   Starting                 67s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)    kubelet          Node no-preload-779884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)    kubelet          Node no-preload-779884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)    kubelet          Node no-preload-779884 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                  node-controller  Node no-preload-779884 event: Registered Node no-preload-779884 in Controller
	
	
	==> dmesg <==
	[Oct18 12:59] overlayfs: idmapped layers are currently not supported
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1091417c452eb2cd93c4e416c602e6b8b1e09d9cd4a8210ef02cbdf618a5faba] <==
	{"level":"warn","ts":"2025-10-18T13:24:03.603326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.636872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.686089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.762417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.801354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.853076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.879742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.913732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.935664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.973278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.990624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:03.996361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.022379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.052384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.074424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.113217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.136423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.153724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.195836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.199351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.237090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.324169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.367833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.392561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:04.486596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33060","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:25:06 up  5:07,  0 user,  load average: 3.17, 2.99, 2.46
	Linux no-preload-779884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a7d6452329a4e7db4dab4d762e866f4e2b95ded5b24f3cba614f53534faacde7] <==
	I1018 13:24:07.233608       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:24:07.315336       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:24:07.315762       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:24:07.315778       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:24:07.315794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:24:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:24:07.442295       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:24:07.442316       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:24:07.442324       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:24:07.442633       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:24:37.439875       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:24:37.442315       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:24:37.443593       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:24:37.443742       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:24:39.042581       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:24:39.042621       1 metrics.go:72] Registering metrics
	I1018 13:24:39.042690       1 controller.go:711] "Syncing nftables rules"
	I1018 13:24:47.439780       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:24:47.439827       1 main.go:301] handling current node
	I1018 13:24:57.439687       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:24:57.439721       1 main.go:301] handling current node
	
	
	==> kube-apiserver [16c6ce16d1fedc6c8abc8dcc8ec26540a3b027cb3aae542e5bb96bce20f62f4a] <==
	I1018 13:24:05.982792       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 13:24:05.983593       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 13:24:05.983603       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 13:24:05.988968       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:24:05.989320       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 13:24:05.989368       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:24:06.002410       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:24:06.012656       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:24:06.014454       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 13:24:06.014650       1 aggregator.go:171] initial CRD sync complete...
	I1018 13:24:06.014670       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:24:06.014678       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:24:06.014684       1 cache.go:39] Caches are synced for autoregister controller
	E1018 13:24:06.066526       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:24:06.294696       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:24:06.457290       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:24:07.768528       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:24:08.070615       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:24:08.177130       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:24:08.237772       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:24:08.605929       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.95.219"}
	I1018 13:24:08.736546       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.52.116"}
	I1018 13:24:11.405142       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:24:11.603794       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:24:11.650498       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [208bca4af3d9ac07c17e2bc79bac77257a4dc9124d606f9ab23f83508618bc86] <==
	I1018 13:24:11.193405       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:24:11.199765       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:24:11.200130       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 13:24:11.200313       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 13:24:11.200387       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 13:24:11.200441       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 13:24:11.200475       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 13:24:11.200505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:24:11.207044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 13:24:11.207139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 13:24:11.207166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 13:24:11.207199       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 13:24:11.215734       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:24:11.216007       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 13:24:11.219892       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:24:11.219990       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:24:11.220006       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:24:11.220015       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:24:11.221216       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 13:24:11.221286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:24:11.227470       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 13:24:11.227577       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:24:11.231689       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 13:24:11.241846       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:24:11.242839       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [92c12098612753238b3bbdae055f559ac0d4a79535b3b02cd6cb0eb6938f7daf] <==
	I1018 13:24:08.063131       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:24:08.551992       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:24:08.654459       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:24:08.654497       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 13:24:08.654570       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:24:08.815995       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:24:08.816067       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:24:08.822021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:24:08.822417       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:24:08.830254       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:24:08.831780       1 config.go:200] "Starting service config controller"
	I1018 13:24:08.831861       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:24:08.832289       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:24:08.839570       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:24:08.839745       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:24:08.839787       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:24:08.854807       1 config.go:309] "Starting node config controller"
	I1018 13:24:08.854900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:24:08.854931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:24:08.932488       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:24:08.939849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 13:24:08.939893       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [efb92bf1d21e5e703b78994086ed6ac620b757e0d9ce9d0c24ad65c41901b598] <==
	I1018 13:24:03.803823       1 serving.go:386] Generated self-signed cert in-memory
	W1018 13:24:05.963492       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 13:24:05.963518       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 13:24:05.963528       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 13:24:05.963535       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 13:24:06.104099       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:24:06.104130       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:24:06.108908       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:24:06.109433       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:24:06.109458       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:24:06.109630       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 13:24:06.214613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:24:11 no-preload-779884 kubelet[761]: I1018 13:24:11.834962     761 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1f23572d-7222-468a-ad61-4d569a419382-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-dlmvv\" (UID: \"1f23572d-7222-468a-ad61-4d569a419382\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv"
	Oct 18 13:24:12 no-preload-779884 kubelet[761]: W1018 13:24:12.176939     761 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/crio-f107a37951190075b4741d4fc297c485e83c44b63f2d22f2096dfd84fd9c3e6f WatchSource:0}: Error finding container f107a37951190075b4741d4fc297c485e83c44b63f2d22f2096dfd84fd9c3e6f: Status 404 returned error can't find the container with id f107a37951190075b4741d4fc297c485e83c44b63f2d22f2096dfd84fd9c3e6f
	Oct 18 13:24:12 no-preload-779884 kubelet[761]: W1018 13:24:12.222445     761 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/78baa17fea0c5a32a47f0796f7371d2efe00599a93846a1b71505a9f034a2e45/crio-bf02e60ba6a94fda88adb2886ffc37203b9f39db8286d28869252989184dd4ee WatchSource:0}: Error finding container bf02e60ba6a94fda88adb2886ffc37203b9f39db8286d28869252989184dd4ee: Status 404 returned error can't find the container with id bf02e60ba6a94fda88adb2886ffc37203b9f39db8286d28869252989184dd4ee
	Oct 18 13:24:16 no-preload-779884 kubelet[761]: I1018 13:24:16.189555     761 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 13:24:19 no-preload-779884 kubelet[761]: I1018 13:24:19.742305     761 scope.go:117] "RemoveContainer" containerID="8f34619c36a56e9ba9955b7057053be58aa0df1e422488f82e354bad17059a8b"
	Oct 18 13:24:20 no-preload-779884 kubelet[761]: I1018 13:24:20.742652     761 scope.go:117] "RemoveContainer" containerID="8f34619c36a56e9ba9955b7057053be58aa0df1e422488f82e354bad17059a8b"
	Oct 18 13:24:20 no-preload-779884 kubelet[761]: I1018 13:24:20.743157     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:20 no-preload-779884 kubelet[761]: E1018 13:24:20.743764     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:21 no-preload-779884 kubelet[761]: I1018 13:24:21.746522     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:21 no-preload-779884 kubelet[761]: E1018 13:24:21.752010     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:22 no-preload-779884 kubelet[761]: I1018 13:24:22.750056     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:22 no-preload-779884 kubelet[761]: E1018 13:24:22.750226     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: I1018 13:24:34.381426     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: I1018 13:24:34.788965     761 scope.go:117] "RemoveContainer" containerID="405a7ae3e7d6520adaeae2d88d3d5b3d6ba015fdc8341e6ec84011e1e734275b"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: I1018 13:24:34.789396     761 scope.go:117] "RemoveContainer" containerID="a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: E1018 13:24:34.789656     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:34 no-preload-779884 kubelet[761]: I1018 13:24:34.824621     761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qspqp" podStartSLOduration=9.973998345 podStartE2EDuration="23.8246037s" podCreationTimestamp="2025-10-18 13:24:11 +0000 UTC" firstStartedPulling="2025-10-18 13:24:12.22954613 +0000 UTC m=+13.006410816" lastFinishedPulling="2025-10-18 13:24:26.080151477 +0000 UTC m=+26.857016171" observedRunningTime="2025-10-18 13:24:26.784045835 +0000 UTC m=+27.560910546" watchObservedRunningTime="2025-10-18 13:24:34.8246037 +0000 UTC m=+35.601468411"
	Oct 18 13:24:37 no-preload-779884 kubelet[761]: I1018 13:24:37.800948     761 scope.go:117] "RemoveContainer" containerID="b3c500723387bb4da1aa6ad2a497c1876e4fde0299065a39583b6fe2b5665375"
	Oct 18 13:24:42 no-preload-779884 kubelet[761]: I1018 13:24:42.125895     761 scope.go:117] "RemoveContainer" containerID="a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	Oct 18 13:24:42 no-preload-779884 kubelet[761]: E1018 13:24:42.126107     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:24:54 no-preload-779884 kubelet[761]: I1018 13:24:54.381266     761 scope.go:117] "RemoveContainer" containerID="a7a1e2a74e7c532c8e40ab51ba6ac2fab2d6d42eeea0d83a807f8c634c2ffeb0"
	Oct 18 13:24:54 no-preload-779884 kubelet[761]: E1018 13:24:54.381478     761 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dlmvv_kubernetes-dashboard(1f23572d-7222-468a-ad61-4d569a419382)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dlmvv" podUID="1f23572d-7222-468a-ad61-4d569a419382"
	Oct 18 13:25:01 no-preload-779884 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:25:01 no-preload-779884 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:25:01 no-preload-779884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1f6ed76f33ec41de20d84a4d205fc3deae7b56485352e204bf54584af25765f4] <==
	2025/10/18 13:24:26 Starting overwatch
	2025/10/18 13:24:26 Using namespace: kubernetes-dashboard
	2025/10/18 13:24:26 Using in-cluster config to connect to apiserver
	2025/10/18 13:24:26 Using secret token for csrf signing
	2025/10/18 13:24:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 13:24:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 13:24:26 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 13:24:26 Generating JWE encryption key
	2025/10/18 13:24:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 13:24:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 13:24:26 Initializing JWE encryption key from synchronized object
	2025/10/18 13:24:26 Creating in-cluster Sidecar client
	2025/10/18 13:24:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:24:26 Serving insecurely on HTTP port: 9090
	2025/10/18 13:24:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b3c500723387bb4da1aa6ad2a497c1876e4fde0299065a39583b6fe2b5665375] <==
	I1018 13:24:07.360665       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 13:24:37.426752       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ec4143fe6f4330883807ec9c0b6ff928adce4e6e507d70144d72d5b08c1f973d] <==
	I1018 13:24:37.890050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:24:37.890105       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 13:24:37.894730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:41.351169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:45.611494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:49.224729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:52.278512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:55.300205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:55.310219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:24:55.310375       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:24:55.310534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-779884_29afe671-2ea7-425f-889e-16e766a03847!
	I1018 13:24:55.310988       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cd1ede5c-2fc2-42b5-a458-71159756ac6f", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-779884_29afe671-2ea7-425f-889e-16e766a03847 became leader
	W1018 13:24:55.326535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:55.343895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:24:55.411551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-779884_29afe671-2ea7-425f-889e-16e766a03847!
	W1018 13:24:57.348166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:57.356356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:59.360735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:24:59.367992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:01.371237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:01.378171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:03.381170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:03.386421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:05.390654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:05.398572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
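The coredns, kindnet and first storage-provisioner instances in the dump above all hit a 30s i/o timeout against the kubernetes Service VIP (10.96.0.1:443) shortly after the restart before their caches synced. If that needed chasing, a hypothetical first look at the VIP plumbing on the node (iptables-mode kube-proxy, per the log above) would be:

	minikube ssh -p no-preload-779884 "sudo iptables -t nat -S KUBE-SERVICES | grep 10.96.0.1"
	minikube ssh -p no-preload-779884 "sudo crictl ps --name kube-proxy"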
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-779884 -n no-preload-779884
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-779884 -n no-preload-779884: exit status 2 (376.340247ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-779884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.67s)
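After the pause attempt, the post-mortem status call above still reports the API server as "Running" (exit status 2). A hypothetical manual check of whether the kube-system containers were actually frozen, assuming the profile is still up, would be:

	minikube ssh -p no-preload-779884 "sudo crictl ps --state running"
	minikube ssh -p no-preload-779884 "sudo runc list -f json"

The second command is the same listing that minikube's own paused-state check shells out to, per the MK_ADDON_ENABLE_PAUSED error in the next failure below.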

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.441472ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:25:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
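The failing step here is minikube's paused-container check: per the error text it shells out to "sudo runc list -f json" on the node, and that call aborts with "open /run/runc: no such file or directory". /run/runc is runc's default state root for root, so either the directory was never created or CRI-O keeps its runc state elsewhere. A hypothetical way to narrow it down on the node:

	minikube ssh -p embed-certs-774829 "sudo ls -ld /run/runc"
	minikube ssh -p embed-certs-774829 "sudo runc list -f json"
	# If /run/runc is missing, check where CRI-O points its runtime root:
	minikube ssh -p embed-certs-774829 "grep -r -A3 runc /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null"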
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-774829 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-774829 describe deploy/metrics-server -n kube-system: exit status 1 (105.305268ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-774829 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
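The assertion at start_stop_delete_test.go:219 looks for the overridden image in the metrics-server deployment, but the deployment was never created because the enable step failed. For completeness, a hypothetical direct check of the image once the addon does come up would be:

	kubectl --context embed-certs-774829 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected, given the --images/--registries overrides passed above:
	#   fake.domain/registry.k8s.io/echoserver:1.4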
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-774829
helpers_test.go:243: (dbg) docker inspect embed-certs-774829:

-- stdout --
	[
	    {
	        "Id": "43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5",
	        "Created": "2025-10-18T13:24:26.79427098Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1029499,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:24:26.864525728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/hosts",
	        "LogPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5-json.log",
	        "Name": "/embed-certs-774829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-774829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-774829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5",
	                "LowerDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-774829",
	                "Source": "/var/lib/docker/volumes/embed-certs-774829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-774829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-774829",
	                "name.minikube.sigs.k8s.io": "embed-certs-774829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6f0b648c6394c8f1926074a2022b9dbd21821fb7b69977863bd080e22714a2",
	            "SandboxKey": "/var/run/docker/netns/4f6f0b648c63",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34177"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34178"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34181"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34179"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34180"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-774829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:a7:d2:8f:be:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e311031c6dc9b74f7ff8e4ce1a369f0cc1a288a1b5c06ece89bfc9abebacd083",
	                    "EndpointID": "7ee8ad1d8bbb701d300ca18962e7bc034212f9d8f5c126161c191343022d559f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-774829",
	                        "43d79c77c4e3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
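
For reference, the host port Docker mapped to the API server (8443/tcp, reported as 34180 in the inspect output above) can be read straight from this container using the same Go-template pattern minikube itself applies later in this log; a sketch:

    docker inspect embed-certs-774829 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
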
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-774829 -n embed-certs-774829
E1018 13:25:55.078232  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-774829 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-774829 logs -n 25: (1.262863277s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-179041                                                                                                                                                                                                                        │ cert-options-179041          │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:19 UTC │ 18 Oct 25 13:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-460322 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │                     │
	│ stop    │ -p old-k8s-version-460322 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:20 UTC │ 18 Oct 25 13:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-460322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-460322 image list --format=json                                                                                                                                                                                               │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ pause   │ -p old-k8s-version-460322 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │                     │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │                     │
	│ stop    │ -p no-preload-779884 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ addons  │ enable dashboard -p no-preload-779884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:24 UTC │
	│ delete  │ -p cert-expiration-076887                                                                                                                                                                                                                     │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:24 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                                                                                                    │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                                                                                               │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:25:10
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:25:10.900825 1033107 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:25:10.901050 1033107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:25:10.901082 1033107 out.go:374] Setting ErrFile to fd 2...
	I1018 13:25:10.901103 1033107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:25:10.901366 1033107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:25:10.901828 1033107 out.go:368] Setting JSON to false
	I1018 13:25:10.902798 1033107 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18463,"bootTime":1760775448,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:25:10.902892 1033107 start.go:141] virtualization:  
	I1018 13:25:10.905999 1033107 out.go:179] * [default-k8s-diff-port-208258] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:25:10.909036 1033107 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:25:10.909123 1033107 notify.go:220] Checking for updates...
	I1018 13:25:10.914241 1033107 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:25:10.916734 1033107 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:25:10.919442 1033107 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:25:10.922054 1033107 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:25:10.924683 1033107 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:25:10.927903 1033107 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:25:10.928016 1033107 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:25:10.957455 1033107 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:25:10.957579 1033107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:25:11.021078 1033107 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:25:11.0114627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:
/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:25:11.021188 1033107 docker.go:318] overlay module found
	I1018 13:25:11.024273 1033107 out.go:179] * Using the docker driver based on user configuration
	I1018 13:25:11.026992 1033107 start.go:305] selected driver: docker
	I1018 13:25:11.027018 1033107 start.go:925] validating driver "docker" against <nil>
	I1018 13:25:11.027032 1033107 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:25:11.027941 1033107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:25:11.088475 1033107 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:25:11.078967535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:25:11.088634 1033107 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 13:25:11.088867 1033107 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:25:11.091872 1033107 out.go:179] * Using Docker driver with root privileges
	I1018 13:25:11.094769 1033107 cni.go:84] Creating CNI manager for ""
	I1018 13:25:11.094927 1033107 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:25:11.094944 1033107 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 13:25:11.095052 1033107 start.go:349] cluster config:
	{Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:25:11.098193 1033107 out.go:179] * Starting "default-k8s-diff-port-208258" primary control-plane node in "default-k8s-diff-port-208258" cluster
	I1018 13:25:11.101026 1033107 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:25:11.104025 1033107 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:25:11.106912 1033107 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:25:11.106993 1033107 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:25:11.107011 1033107 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:25:11.107028 1033107 cache.go:58] Caching tarball of preloaded images
	I1018 13:25:11.107209 1033107 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:25:11.107221 1033107 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:25:11.107330 1033107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/config.json ...
	I1018 13:25:11.107350 1033107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/config.json: {Name:mk5c2c984c1c61fee40f17dfb3680fc50e564557 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:11.128437 1033107 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:25:11.128462 1033107 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:25:11.128486 1033107 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:25:11.128517 1033107 start.go:360] acquireMachinesLock for default-k8s-diff-port-208258: {Name:mk1489085c407b0af704e7c70968afb6ecaa3acb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:25:11.128639 1033107 start.go:364] duration metric: took 100.317µs to acquireMachinesLock for "default-k8s-diff-port-208258"
	I1018 13:25:11.128677 1033107 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:25:11.128746 1033107 start.go:125] createHost starting for "" (driver="docker")
	W1018 13:25:09.823737 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	W1018 13:25:12.322967 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	W1018 13:25:14.323830 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	I1018 13:25:11.132377 1033107 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 13:25:11.132701 1033107 start.go:159] libmachine.API.Create for "default-k8s-diff-port-208258" (driver="docker")
	I1018 13:25:11.132747 1033107 client.go:168] LocalClient.Create starting
	I1018 13:25:11.132818 1033107 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 13:25:11.132850 1033107 main.go:141] libmachine: Decoding PEM data...
	I1018 13:25:11.132863 1033107 main.go:141] libmachine: Parsing certificate...
	I1018 13:25:11.132914 1033107 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 13:25:11.132931 1033107 main.go:141] libmachine: Decoding PEM data...
	I1018 13:25:11.132941 1033107 main.go:141] libmachine: Parsing certificate...
	I1018 13:25:11.133457 1033107 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-208258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 13:25:11.150532 1033107 cli_runner.go:211] docker network inspect default-k8s-diff-port-208258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 13:25:11.150616 1033107 network_create.go:284] running [docker network inspect default-k8s-diff-port-208258] to gather additional debugging logs...
	I1018 13:25:11.150633 1033107 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-208258
	W1018 13:25:11.167427 1033107 cli_runner.go:211] docker network inspect default-k8s-diff-port-208258 returned with exit code 1
	I1018 13:25:11.167459 1033107 network_create.go:287] error running [docker network inspect default-k8s-diff-port-208258]: docker network inspect default-k8s-diff-port-208258: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-208258 not found
	I1018 13:25:11.167477 1033107 network_create.go:289] output of [docker network inspect default-k8s-diff-port-208258]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-208258 not found
	
	** /stderr **
	I1018 13:25:11.167596 1033107 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:25:11.184832 1033107 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
	I1018 13:25:11.185258 1033107 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b162987809b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:5f:25:ac:cd:2a} reservation:<nil>}
	I1018 13:25:11.185508 1033107 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c986d614dab5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:69:4f:12:e6:e4} reservation:<nil>}
	I1018 13:25:11.185815 1033107 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e311031c6dc9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:16:60:d8:49:2a} reservation:<nil>}
	I1018 13:25:11.186231 1033107 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a48a20}
	I1018 13:25:11.186256 1033107 network_create.go:124] attempt to create docker network default-k8s-diff-port-208258 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 13:25:11.186313 1033107 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-208258 default-k8s-diff-port-208258
	I1018 13:25:11.251227 1033107 network_create.go:108] docker network default-k8s-diff-port-208258 192.168.85.0/24 created
	I1018 13:25:11.251264 1033107 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-208258" container
	I1018 13:25:11.251349 1033107 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 13:25:11.267783 1033107 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-208258 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-208258 --label created_by.minikube.sigs.k8s.io=true
	I1018 13:25:11.285434 1033107 oci.go:103] Successfully created a docker volume default-k8s-diff-port-208258
	I1018 13:25:11.285594 1033107 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-208258-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-208258 --entrypoint /usr/bin/test -v default-k8s-diff-port-208258:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 13:25:11.844522 1033107 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-208258
	I1018 13:25:11.844602 1033107 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:25:11.844626 1033107 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 13:25:11.844697 1033107 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-208258:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 13:25:16.324587 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	W1018 13:25:18.325029 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	I1018 13:25:16.313679 1033107 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-208258:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.468932272s)
	I1018 13:25:16.313713 1033107 kic.go:203] duration metric: took 4.469083593s to extract preloaded images to volume ...
	W1018 13:25:16.313853 1033107 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 13:25:16.313972 1033107 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 13:25:16.377040 1033107 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-208258 --name default-k8s-diff-port-208258 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-208258 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-208258 --network default-k8s-diff-port-208258 --ip 192.168.85.2 --volume default-k8s-diff-port-208258:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 13:25:16.741936 1033107 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Running}}
	I1018 13:25:16.765264 1033107 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:25:16.798179 1033107 cli_runner.go:164] Run: docker exec default-k8s-diff-port-208258 stat /var/lib/dpkg/alternatives/iptables
	I1018 13:25:16.859381 1033107 oci.go:144] the created container "default-k8s-diff-port-208258" has a running status.
	I1018 13:25:16.859408 1033107 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa...
	I1018 13:25:17.440750 1033107 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 13:25:17.460604 1033107 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:25:17.476786 1033107 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 13:25:17.476811 1033107 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-208258 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 13:25:17.520487 1033107 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:25:17.538289 1033107 machine.go:93] provisionDockerMachine start ...
	I1018 13:25:17.538404 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:17.557758 1033107 main.go:141] libmachine: Using SSH client type: native
	I1018 13:25:17.558105 1033107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1018 13:25:17.558122 1033107 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:25:17.558767 1033107 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51338->127.0.0.1:34182: read: connection reset by peer
	I1018 13:25:20.707535 1033107 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-208258
	
	I1018 13:25:20.707561 1033107 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-208258"
	I1018 13:25:20.707643 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:20.729165 1033107 main.go:141] libmachine: Using SSH client type: native
	I1018 13:25:20.729506 1033107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1018 13:25:20.729524 1033107 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-208258 && echo "default-k8s-diff-port-208258" | sudo tee /etc/hostname
	I1018 13:25:20.890268 1033107 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-208258
	
	I1018 13:25:20.890381 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:20.915047 1033107 main.go:141] libmachine: Using SSH client type: native
	I1018 13:25:20.915362 1033107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1018 13:25:20.915395 1033107 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-208258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-208258/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-208258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:25:21.072231 1033107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:25:21.072257 1033107 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:25:21.072286 1033107 ubuntu.go:190] setting up certificates
	I1018 13:25:21.072302 1033107 provision.go:84] configureAuth start
	I1018 13:25:21.072368 1033107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:25:21.089743 1033107 provision.go:143] copyHostCerts
	I1018 13:25:21.089823 1033107 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:25:21.089840 1033107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:25:21.089920 1033107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:25:21.090022 1033107 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:25:21.090027 1033107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:25:21.090052 1033107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:25:21.090109 1033107 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:25:21.090114 1033107 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:25:21.090136 1033107 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:25:21.090191 1033107 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-208258 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-208258 localhost minikube]
	I1018 13:25:21.462853 1033107 provision.go:177] copyRemoteCerts
	I1018 13:25:21.462949 1033107 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:25:21.462999 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:21.481041 1033107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:25:21.589795 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:25:21.610700 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 13:25:21.629932 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 13:25:21.648591 1033107 provision.go:87] duration metric: took 576.263862ms to configureAuth
	I1018 13:25:21.648631 1033107 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:25:21.648823 1033107 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:25:21.648941 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:21.666680 1033107 main.go:141] libmachine: Using SSH client type: native
	I1018 13:25:21.666980 1033107 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34182 <nil> <nil>}
	I1018 13:25:21.667003 1033107 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:25:21.989740 1033107 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:25:21.989762 1033107 machine.go:96] duration metric: took 4.451449032s to provisionDockerMachine
	I1018 13:25:21.989772 1033107 client.go:171] duration metric: took 10.857018719s to LocalClient.Create
	I1018 13:25:21.989785 1033107 start.go:167] duration metric: took 10.857087069s to libmachine.API.Create "default-k8s-diff-port-208258"
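Editor's note: the CRIO_MINIKUBE_OPTIONS drop-in written a few lines above only takes effect because the same SSH command restarts crio. A quick hedged check (run inside the node, e.g. via minikube ssh; this assumes the crio unit on the kicbase image sources /etc/sysconfig/crio.minikube):

    # Confirm the option file landed and crio came back up
    sudo cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio   # should print "active"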
	I1018 13:25:21.989792 1033107 start.go:293] postStartSetup for "default-k8s-diff-port-208258" (driver="docker")
	I1018 13:25:21.989803 1033107 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:25:21.989886 1033107 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:25:21.989928 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:22.012136 1033107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:25:22.120807 1033107 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:25:22.124457 1033107 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:25:22.124484 1033107 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:25:22.124496 1033107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:25:22.124555 1033107 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:25:22.124646 1033107 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:25:22.124749 1033107 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:25:22.133041 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:25:22.152516 1033107 start.go:296] duration metric: took 162.707624ms for postStartSetup
	I1018 13:25:22.152900 1033107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:25:22.170656 1033107 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/config.json ...
	I1018 13:25:22.171002 1033107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:25:22.171068 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:22.193308 1033107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:25:22.292698 1033107 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:25:22.297490 1033107 start.go:128] duration metric: took 11.168729619s to createHost
	I1018 13:25:22.297568 1033107 start.go:83] releasing machines lock for "default-k8s-diff-port-208258", held for 11.168915755s
	I1018 13:25:22.297678 1033107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:25:22.314790 1033107 ssh_runner.go:195] Run: cat /version.json
	I1018 13:25:22.314809 1033107 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:25:22.314847 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:22.314871 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:22.343710 1033107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:25:22.345259 1033107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:25:22.443350 1033107 ssh_runner.go:195] Run: systemctl --version
	I1018 13:25:22.532444 1033107 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:25:22.583483 1033107 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:25:22.588903 1033107 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:25:22.589032 1033107 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:25:22.619130 1033107 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 13:25:22.619150 1033107 start.go:495] detecting cgroup driver to use...
	I1018 13:25:22.619184 1033107 detect.go:187] detected "cgroupfs" cgroup driver on host os
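Editor's note: minikube picks the cgroup driver from what it detects on the host. A rough way to reproduce that kind of detection manually (illustrative only, not the exact heuristic minikube uses):

    # cgroup v2 mounts as cgroup2fs; v1 shows tmpfs here
    stat -fc %T /sys/fs/cgroup
    # If PID 1 is systemd, the systemd cgroup driver is usually the better fit
    ps -p 1 -o comm=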
	I1018 13:25:22.619237 1033107 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:25:22.638839 1033107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:25:22.651994 1033107 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:25:22.652106 1033107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:25:22.670971 1033107 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:25:22.691190 1033107 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:25:22.824733 1033107 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:25:22.958844 1033107 docker.go:234] disabling docker service ...
	I1018 13:25:22.958976 1033107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:25:22.982303 1033107 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:25:22.996989 1033107 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:25:23.122949 1033107 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:25:23.273783 1033107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:25:23.288756 1033107 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:25:23.302542 1033107 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:25:23.302614 1033107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:25:23.311636 1033107 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:25:23.311832 1033107 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:25:23.324196 1033107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:25:23.334422 1033107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:25:23.344037 1033107 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:25:23.352413 1033107 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:25:23.361118 1033107 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:25:23.374699 1033107 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:25:23.385207 1033107 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:25:23.393317 1033107 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:25:23.402984 1033107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:25:23.533768 1033107 ssh_runner.go:195] Run: sudo systemctl restart crio
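Editor's note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, force the cgroupfs cgroup manager, set conmon_cgroup to "pod", and open unprivileged ports via default_sysctls. A hedged way to confirm the rewrite after the restart (inside the node):

    # The edited keys should now read roughly as follows
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",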
	I1018 13:25:23.692940 1033107 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:25:23.693100 1033107 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:25:23.697157 1033107 start.go:563] Will wait 60s for crictl version
	I1018 13:25:23.697274 1033107 ssh_runner.go:195] Run: which crictl
	I1018 13:25:23.705176 1033107 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:25:23.735443 1033107 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:25:23.735622 1033107 ssh_runner.go:195] Run: crio --version
	I1018 13:25:23.768372 1033107 ssh_runner.go:195] Run: crio --version
	I1018 13:25:23.810930 1033107 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 13:25:20.822458 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	W1018 13:25:22.822899 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	I1018 13:25:23.813640 1033107 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-208258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:25:23.833254 1033107 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:25:23.838532 1033107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:25:23.849036 1033107 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:25:23.849160 1033107 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:25:23.849217 1033107 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:25:23.886182 1033107 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:25:23.886207 1033107 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:25:23.886265 1033107 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:25:23.921038 1033107 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:25:23.921064 1033107 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:25:23.921071 1033107 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1018 13:25:23.921174 1033107 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-208258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
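Editor's note: the [Service] block above is what gets written to the kubelet systemd drop-in (the scp of 10-kubeadm.conf appears a few lines below). Once the node is up, the effective kubelet command line can be checked directly; a short sketch, run inside the node:

    # Show the merged unit, including the drop-in with the minikube ExecStart
    systemctl cat kubelet
    # Or just the final ExecStart line
    systemctl show kubelet --property=ExecStart --no-pager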
	I1018 13:25:23.921263 1033107 ssh_runner.go:195] Run: crio config
	I1018 13:25:23.977457 1033107 cni.go:84] Creating CNI manager for ""
	I1018 13:25:23.977486 1033107 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:25:23.977510 1033107 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:25:23.977568 1033107 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-208258 NodeName:default-k8s-diff-port-208258 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:25:23.977715 1033107 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-208258"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:25:23.977803 1033107 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:25:23.989245 1033107 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:25:23.989328 1033107 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:25:23.997346 1033107 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 13:25:24.014180 1033107 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:25:24.029204 1033107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
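Editor's note: the 2225-byte file copied above is the composed kubeadm config dumped earlier in the log. It can be sanity-checked on the node before the real init runs. A minimal sketch, assuming the path from the log; kubeadm config validate exists in recent kubeadm releases, and a dry-run init is the more conservative alternative:

    # Validate the InitConfiguration/ClusterConfiguration without touching the node
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Or exercise the full init path without making changes
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run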
	I1018 13:25:24.048396 1033107 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:25:24.052782 1033107 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:25:24.064714 1033107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:25:24.198920 1033107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:25:24.215954 1033107 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258 for IP: 192.168.85.2
	I1018 13:25:24.216020 1033107 certs.go:195] generating shared ca certs ...
	I1018 13:25:24.216063 1033107 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:24.216242 1033107 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:25:24.216337 1033107 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:25:24.216371 1033107 certs.go:257] generating profile certs ...
	I1018 13:25:24.216448 1033107 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.key
	I1018 13:25:24.216492 1033107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt with IP's: []
	I1018 13:25:24.530862 1033107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt ...
	I1018 13:25:24.530893 1033107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: {Name:mkd49c1c147dfce81683a4ae9430f59b03fb070e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:24.531185 1033107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.key ...
	I1018 13:25:24.531203 1033107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.key: {Name:mk20637ad1c34ea0ecb51a1d9853004e63d3027e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:24.531317 1033107 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key.b8a2e090
	I1018 13:25:24.531337 1033107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.crt.b8a2e090 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 13:25:26.496766 1033107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.crt.b8a2e090 ...
	I1018 13:25:26.496799 1033107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.crt.b8a2e090: {Name:mkfb3ceed89de1bdc0005649c052562bd94a628b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:26.496994 1033107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key.b8a2e090 ...
	I1018 13:25:26.497009 1033107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key.b8a2e090: {Name:mk2531bcfe0501e0066d8378b5324a2faf3bd496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:26.497101 1033107 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.crt.b8a2e090 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.crt
	I1018 13:25:26.497182 1033107 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key.b8a2e090 -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key
	I1018 13:25:26.497245 1033107 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.key
	I1018 13:25:26.497264 1033107 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.crt with IP's: []
	I1018 13:25:26.951396 1033107 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.crt ...
	I1018 13:25:26.951430 1033107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.crt: {Name:mk0f947a7404a88f60350e44df7ce54f89516d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:26.951610 1033107 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.key ...
	I1018 13:25:26.951625 1033107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.key: {Name:mk9037480a408d3b1a7b11c0f7b8f83fb9de136d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:26.951840 1033107 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:25:26.951885 1033107 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:25:26.951899 1033107 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:25:26.951929 1033107 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:25:26.951957 1033107 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:25:26.951982 1033107 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:25:26.952046 1033107 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:25:26.952743 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:25:26.971746 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:25:26.990300 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:25:27.013805 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:25:27.033858 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 13:25:27.054139 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 13:25:27.073973 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:25:27.092752 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:25:27.112232 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:25:27.136589 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:25:27.156504 1033107 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:25:27.174665 1033107 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:25:27.188775 1033107 ssh_runner.go:195] Run: openssl version
	I1018 13:25:27.195092 1033107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:25:27.203619 1033107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:25:27.207900 1033107 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:25:27.207989 1033107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:25:27.249475 1033107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:25:27.258630 1033107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:25:27.266943 1033107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:25:27.270928 1033107 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:25:27.271027 1033107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:25:27.312194 1033107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 13:25:27.320827 1033107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:25:27.329724 1033107 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:25:27.333663 1033107 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:25:27.333759 1033107 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:25:27.377400 1033107 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
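Editor's note: the ln -fs commands above create the hash-named links (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL uses to look up a CA by subject hash, and the hash comes from the openssl x509 -hash calls in the same block. A small sketch of how those link names are derived:

    # The link name is "<subject-hash>.0"; -hash prints exactly that hash
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "/etc/ssl/certs/${h}.0"   # per the log above, prints /etc/ssl/certs/b5213941.0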
	I1018 13:25:27.386067 1033107 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:25:27.389799 1033107 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 13:25:27.389898 1033107 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:25:27.389982 1033107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:25:27.390050 1033107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:25:27.420895 1033107 cri.go:89] found id: ""
	I1018 13:25:27.421036 1033107 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:25:27.430444 1033107 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 13:25:27.438697 1033107 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 13:25:27.438793 1033107 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 13:25:27.448598 1033107 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 13:25:27.448620 1033107 kubeadm.go:157] found existing configuration files:
	
	I1018 13:25:27.448727 1033107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 13:25:27.457344 1033107 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 13:25:27.457498 1033107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 13:25:27.466174 1033107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 13:25:27.474737 1033107 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 13:25:27.474840 1033107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 13:25:27.485655 1033107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 13:25:27.495449 1033107 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 13:25:27.495551 1033107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 13:25:27.503257 1033107 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 13:25:27.512011 1033107 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 13:25:27.512082 1033107 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 13:25:27.519604 1033107 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 13:25:27.569946 1033107 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 13:25:27.570012 1033107 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 13:25:27.594128 1033107 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 13:25:27.594209 1033107 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 13:25:27.594252 1033107 kubeadm.go:318] OS: Linux
	I1018 13:25:27.594317 1033107 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 13:25:27.594373 1033107 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 13:25:27.594432 1033107 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 13:25:27.594490 1033107 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 13:25:27.594544 1033107 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 13:25:27.594606 1033107 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 13:25:27.594661 1033107 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 13:25:27.594712 1033107 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 13:25:27.594764 1033107 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 13:25:27.670318 1033107 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 13:25:27.670436 1033107 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 13:25:27.670535 1033107 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 13:25:27.685656 1033107 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 13:25:24.824274 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	W1018 13:25:27.323450 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	W1018 13:25:29.325465 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	I1018 13:25:27.691514 1033107 out.go:252]   - Generating certificates and keys ...
	I1018 13:25:27.691626 1033107 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 13:25:27.691725 1033107 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 13:25:28.001695 1033107 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 13:25:28.590901 1033107 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 13:25:29.144896 1033107 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 13:25:30.162948 1033107 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 13:25:30.603444 1033107 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 13:25:30.603858 1033107 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-208258 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1018 13:25:31.824505 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	W1018 13:25:34.323543 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	I1018 13:25:31.935352 1033107 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 13:25:31.935548 1033107 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-208258 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 13:25:33.275915 1033107 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 13:25:33.669726 1033107 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 13:25:33.823556 1033107 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 13:25:33.823629 1033107 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 13:25:34.026235 1033107 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 13:25:34.124487 1033107 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 13:25:34.606493 1033107 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 13:25:34.693675 1033107 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 13:25:35.132471 1033107 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 13:25:35.133295 1033107 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 13:25:35.136343 1033107 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 13:25:35.139739 1033107 out.go:252]   - Booting up control plane ...
	I1018 13:25:35.139863 1033107 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 13:25:35.139948 1033107 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 13:25:35.140775 1033107 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 13:25:35.161058 1033107 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 13:25:35.161186 1033107 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 13:25:35.172597 1033107 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 13:25:35.173200 1033107 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 13:25:35.173261 1033107 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 13:25:35.327683 1033107 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 13:25:35.327813 1033107 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1018 13:25:36.823139 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	W1018 13:25:39.323795 1029063 node_ready.go:57] node "embed-certs-774829" has "Ready":"False" status (will retry)
	I1018 13:25:36.332046 1033107 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001589473s
	I1018 13:25:36.332498 1033107 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 13:25:36.332595 1033107 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1018 13:25:36.332720 1033107 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 13:25:36.332806 1033107 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 13:25:39.450833 1033107 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.117270855s
	I1018 13:25:43.286957 1033107 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.954306365s
	I1018 13:25:43.837624 1033107 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.50390267s
	I1018 13:25:43.858033 1033107 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 13:25:43.871621 1033107 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 13:25:43.886329 1033107 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 13:25:43.886582 1033107 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-208258 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 13:25:43.899226 1033107 kubeadm.go:318] [bootstrap-token] Using token: o2n1lw.tasfvaewbbel2qie
	I1018 13:25:40.340916 1029063 node_ready.go:49] node "embed-certs-774829" is "Ready"
	I1018 13:25:40.340953 1029063 node_ready.go:38] duration metric: took 39.521095208s for node "embed-certs-774829" to be "Ready" ...
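Editor's note: the ~39.5s wait above is the programmatic equivalent of blocking on the node's Ready condition. A hedged one-liner doing the same from the CLI, assuming the kubeconfig context for the embed-certs-774829 profile is selected:

    # Block until the node reports Ready (or the timeout expires)
    kubectl wait node/embed-certs-774829 --for=condition=Ready --timeout=5m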
	I1018 13:25:40.340968 1029063 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:25:40.341030 1029063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:25:40.365304 1029063 api_server.go:72] duration metric: took 41.453138614s to wait for apiserver process to appear ...
	I1018 13:25:40.365327 1029063 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:25:40.365348 1029063 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:25:40.402711 1029063 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 13:25:40.404089 1029063 api_server.go:141] control plane version: v1.34.1
	I1018 13:25:40.404114 1029063 api_server.go:131] duration metric: took 38.779357ms to wait for apiserver health ...
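Editor's note: the healthz probe above hits the API server endpoint directly. The same check can be reproduced through kubectl, which handles the client certificates; a sketch, assuming the kubeconfig for this profile is active:

    # Returns "ok" when the API server is healthy
    kubectl get --raw /healthz
    # Per-component detail is available from the verbose variant
    kubectl get --raw '/healthz?verbose'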
	I1018 13:25:40.404124 1029063 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:25:40.465831 1029063 system_pods.go:59] 8 kube-system pods found
	I1018 13:25:40.465868 1029063 system_pods.go:61] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 13:25:40.465875 1029063 system_pods.go:61] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running
	I1018 13:25:40.465881 1029063 system_pods.go:61] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:25:40.465886 1029063 system_pods.go:61] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running
	I1018 13:25:40.465891 1029063 system_pods.go:61] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running
	I1018 13:25:40.465895 1029063 system_pods.go:61] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:25:40.465899 1029063 system_pods.go:61] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running
	I1018 13:25:40.465904 1029063 system_pods.go:61] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Pending
	I1018 13:25:40.465910 1029063 system_pods.go:74] duration metric: took 61.779833ms to wait for pod list to return data ...
	I1018 13:25:40.465918 1029063 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:25:40.498354 1029063 default_sa.go:45] found service account: "default"
	I1018 13:25:40.498440 1029063 default_sa.go:55] duration metric: took 32.51524ms for default service account to be created ...
	I1018 13:25:40.498465 1029063 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:25:40.513872 1029063 system_pods.go:86] 8 kube-system pods found
	I1018 13:25:40.513901 1029063 system_pods.go:89] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Pending
	I1018 13:25:40.513908 1029063 system_pods.go:89] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running
	I1018 13:25:40.513913 1029063 system_pods.go:89] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:25:40.513919 1029063 system_pods.go:89] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running
	I1018 13:25:40.513923 1029063 system_pods.go:89] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running
	I1018 13:25:40.513928 1029063 system_pods.go:89] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:25:40.513932 1029063 system_pods.go:89] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running
	I1018 13:25:40.513939 1029063 system_pods.go:89] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:25:40.513959 1029063 retry.go:31] will retry after 225.34245ms: missing components: kube-dns
	I1018 13:25:40.743505 1029063 system_pods.go:86] 8 kube-system pods found
	I1018 13:25:40.743539 1029063 system_pods.go:89] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:25:40.743547 1029063 system_pods.go:89] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running
	I1018 13:25:40.743555 1029063 system_pods.go:89] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:25:40.743560 1029063 system_pods.go:89] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running
	I1018 13:25:40.743565 1029063 system_pods.go:89] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running
	I1018 13:25:40.743568 1029063 system_pods.go:89] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:25:40.743572 1029063 system_pods.go:89] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running
	I1018 13:25:40.743578 1029063 system_pods.go:89] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:25:40.743593 1029063 retry.go:31] will retry after 309.571465ms: missing components: kube-dns
	I1018 13:25:41.059671 1029063 system_pods.go:86] 8 kube-system pods found
	I1018 13:25:41.059702 1029063 system_pods.go:89] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:25:41.059708 1029063 system_pods.go:89] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running
	I1018 13:25:41.059715 1029063 system_pods.go:89] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:25:41.059719 1029063 system_pods.go:89] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running
	I1018 13:25:41.059724 1029063 system_pods.go:89] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running
	I1018 13:25:41.059727 1029063 system_pods.go:89] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:25:41.059731 1029063 system_pods.go:89] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running
	I1018 13:25:41.059735 1029063 system_pods.go:89] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Running
	I1018 13:25:41.059749 1029063 retry.go:31] will retry after 426.452058ms: missing components: kube-dns
	I1018 13:25:41.490213 1029063 system_pods.go:86] 8 kube-system pods found
	I1018 13:25:41.490254 1029063 system_pods.go:89] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:25:41.490265 1029063 system_pods.go:89] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running
	I1018 13:25:41.490272 1029063 system_pods.go:89] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:25:41.490276 1029063 system_pods.go:89] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running
	I1018 13:25:41.490281 1029063 system_pods.go:89] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running
	I1018 13:25:41.490285 1029063 system_pods.go:89] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:25:41.490289 1029063 system_pods.go:89] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running
	I1018 13:25:41.490293 1029063 system_pods.go:89] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Running
	I1018 13:25:41.490308 1029063 retry.go:31] will retry after 408.567211ms: missing components: kube-dns
	I1018 13:25:41.903025 1029063 system_pods.go:86] 8 kube-system pods found
	I1018 13:25:41.903057 1029063 system_pods.go:89] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:25:41.903064 1029063 system_pods.go:89] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running
	I1018 13:25:41.903070 1029063 system_pods.go:89] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:25:41.903075 1029063 system_pods.go:89] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running
	I1018 13:25:41.903082 1029063 system_pods.go:89] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running
	I1018 13:25:41.903086 1029063 system_pods.go:89] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:25:41.903090 1029063 system_pods.go:89] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running
	I1018 13:25:41.903094 1029063 system_pods.go:89] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Running
	I1018 13:25:41.903109 1029063 retry.go:31] will retry after 620.419786ms: missing components: kube-dns
	I1018 13:25:42.527304 1029063 system_pods.go:86] 8 kube-system pods found
	I1018 13:25:42.527334 1029063 system_pods.go:89] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Running
	I1018 13:25:42.527340 1029063 system_pods.go:89] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running
	I1018 13:25:42.527344 1029063 system_pods.go:89] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:25:42.527349 1029063 system_pods.go:89] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running
	I1018 13:25:42.527353 1029063 system_pods.go:89] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running
	I1018 13:25:42.527356 1029063 system_pods.go:89] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:25:42.527361 1029063 system_pods.go:89] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running
	I1018 13:25:42.527365 1029063 system_pods.go:89] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Running
	I1018 13:25:42.527373 1029063 system_pods.go:126] duration metric: took 2.028889119s to wait for k8s-apps to be running ...
	I1018 13:25:42.527381 1029063 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:25:42.527438 1029063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:25:42.558149 1029063 system_svc.go:56] duration metric: took 30.75755ms WaitForService to wait for kubelet
	I1018 13:25:42.558175 1029063 kubeadm.go:586] duration metric: took 43.646014374s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:25:42.558194 1029063 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:25:42.561691 1029063 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:25:42.561718 1029063 node_conditions.go:123] node cpu capacity is 2
	I1018 13:25:42.561731 1029063 node_conditions.go:105] duration metric: took 3.531497ms to run NodePressure ...
	I1018 13:25:42.561744 1029063 start.go:241] waiting for startup goroutines ...
	I1018 13:25:42.561751 1029063 start.go:246] waiting for cluster config update ...
	I1018 13:25:42.561762 1029063 start.go:255] writing updated cluster config ...
	I1018 13:25:42.562027 1029063 ssh_runner.go:195] Run: rm -f paused
	I1018 13:25:42.566131 1029063 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:25:42.569755 1029063 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ch4qs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:42.574902 1029063 pod_ready.go:94] pod "coredns-66bc5c9577-ch4qs" is "Ready"
	I1018 13:25:42.574967 1029063 pod_ready.go:86] duration metric: took 5.18915ms for pod "coredns-66bc5c9577-ch4qs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:42.577656 1029063 pod_ready.go:83] waiting for pod "etcd-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:42.584326 1029063 pod_ready.go:94] pod "etcd-embed-certs-774829" is "Ready"
	I1018 13:25:42.584349 1029063 pod_ready.go:86] duration metric: took 6.624376ms for pod "etcd-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:42.586648 1029063 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:42.593944 1029063 pod_ready.go:94] pod "kube-apiserver-embed-certs-774829" is "Ready"
	I1018 13:25:42.593972 1029063 pod_ready.go:86] duration metric: took 7.298296ms for pod "kube-apiserver-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:42.596329 1029063 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:42.971183 1029063 pod_ready.go:94] pod "kube-controller-manager-embed-certs-774829" is "Ready"
	I1018 13:25:42.971263 1029063 pod_ready.go:86] duration metric: took 374.859737ms for pod "kube-controller-manager-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:43.171167 1029063 pod_ready.go:83] waiting for pod "kube-proxy-vqgcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:43.571319 1029063 pod_ready.go:94] pod "kube-proxy-vqgcc" is "Ready"
	I1018 13:25:43.571353 1029063 pod_ready.go:86] duration metric: took 400.162598ms for pod "kube-proxy-vqgcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:43.770603 1029063 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:44.170026 1029063 pod_ready.go:94] pod "kube-scheduler-embed-certs-774829" is "Ready"
	I1018 13:25:44.170059 1029063 pod_ready.go:86] duration metric: took 399.426546ms for pod "kube-scheduler-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:25:44.170071 1029063 pod_ready.go:40] duration metric: took 1.603911403s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:25:44.227218 1029063 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:25:44.232427 1029063 out.go:179] * Done! kubectl is now configured to use "embed-certs-774829" cluster and "default" namespace by default
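The pod_ready.go waits above poll the API server until every labelled kube-system pod reports the Ready condition, giving up after 4m0s. As a hedged sketch of that polling pattern (not minikube's actual code; the kubeconfig path and the kube-dns label selector are illustrative assumptions), a client-go loop along these lines performs the same check:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative: load the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, for at most 4 minutes, until all kube-dns pods are Ready,
	// mirroring the retry/backoff loop visible in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-dns",
			})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // treat transient errors and an empty list as "not ready yet"
			}
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all kube-dns pods are Ready")
}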
	I1018 13:25:43.902222 1033107 out.go:252]   - Configuring RBAC rules ...
	I1018 13:25:43.902357 1033107 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 13:25:43.909572 1033107 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 13:25:43.919743 1033107 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 13:25:43.924284 1033107 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 13:25:43.931502 1033107 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 13:25:43.936000 1033107 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 13:25:44.264294 1033107 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 13:25:44.686846 1033107 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 13:25:45.259178 1033107 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 13:25:45.261295 1033107 kubeadm.go:318] 
	I1018 13:25:45.261391 1033107 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 13:25:45.261405 1033107 kubeadm.go:318] 
	I1018 13:25:45.261486 1033107 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 13:25:45.261495 1033107 kubeadm.go:318] 
	I1018 13:25:45.261526 1033107 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 13:25:45.262135 1033107 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 13:25:45.262205 1033107 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 13:25:45.262218 1033107 kubeadm.go:318] 
	I1018 13:25:45.262282 1033107 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 13:25:45.262294 1033107 kubeadm.go:318] 
	I1018 13:25:45.262345 1033107 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 13:25:45.262357 1033107 kubeadm.go:318] 
	I1018 13:25:45.262413 1033107 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 13:25:45.262496 1033107 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 13:25:45.262573 1033107 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 13:25:45.262584 1033107 kubeadm.go:318] 
	I1018 13:25:45.263039 1033107 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 13:25:45.263126 1033107 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 13:25:45.263131 1033107 kubeadm.go:318] 
	I1018 13:25:45.263558 1033107 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token o2n1lw.tasfvaewbbel2qie \
	I1018 13:25:45.263702 1033107 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e \
	I1018 13:25:45.263926 1033107 kubeadm.go:318] 	--control-plane 
	I1018 13:25:45.263953 1033107 kubeadm.go:318] 
	I1018 13:25:45.264262 1033107 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 13:25:45.264280 1033107 kubeadm.go:318] 
	I1018 13:25:45.264607 1033107 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token o2n1lw.tasfvaewbbel2qie \
	I1018 13:25:45.264974 1033107 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e 
	I1018 13:25:45.280911 1033107 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 13:25:45.281359 1033107 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 13:25:45.281489 1033107 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
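The kubeadm join commands printed above authenticate the control plane via --discovery-token-ca-cert-hash, which is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal sketch of how a joining node could recompute that value (the CA file path is an assumption for illustration, not taken from this run):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// discoveryHash returns kubeadm's "sha256:<hex>" value for a PEM-encoded CA cert:
// the SHA-256 of the certificate's DER-encoded SubjectPublicKeyInfo.
func discoveryHash(caPEM []byte) (string, error) {
	block, _ := pem.Decode(caPEM)
	if block == nil {
		return "", fmt.Errorf("no PEM block found in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	// Assumed path; adjust to wherever the cluster CA lives on the node.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	h, err := discoveryHash(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(h)
}

If the recomputed value matches the hash embedded in the printed join command, the joining node is talking to the intended cluster CA.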
	I1018 13:25:45.281518 1033107 cni.go:84] Creating CNI manager for ""
	I1018 13:25:45.281532 1033107 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:25:45.285039 1033107 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 13:25:45.288989 1033107 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 13:25:45.298034 1033107 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 13:25:45.298112 1033107 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 13:25:45.335935 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 13:25:45.746371 1033107 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 13:25:45.746513 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:45.746605 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-208258 minikube.k8s.io/updated_at=2025_10_18T13_25_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=default-k8s-diff-port-208258 minikube.k8s.io/primary=true
	I1018 13:25:45.910385 1033107 ops.go:34] apiserver oom_adj: -16
	I1018 13:25:45.910483 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:46.410545 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:46.910723 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:47.410814 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:47.911130 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:48.410756 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:48.910906 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:49.411141 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:49.911552 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:50.411039 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:50.910726 1033107 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:25:51.061561 1033107 kubeadm.go:1113] duration metric: took 5.315109698s to wait for elevateKubeSystemPrivileges
	I1018 13:25:51.061594 1033107 kubeadm.go:402] duration metric: took 23.671699422s to StartCluster
	I1018 13:25:51.061612 1033107 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:51.061676 1033107 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:25:51.063290 1033107 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:25:51.063517 1033107 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:25:51.063964 1033107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 13:25:51.064344 1033107 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:25:51.064457 1033107 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:25:51.064532 1033107 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-208258"
	I1018 13:25:51.064552 1033107 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-208258"
	I1018 13:25:51.064583 1033107 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:25:51.065047 1033107 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:25:51.065303 1033107 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-208258"
	I1018 13:25:51.065324 1033107 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-208258"
	I1018 13:25:51.065579 1033107 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:25:51.069635 1033107 out.go:179] * Verifying Kubernetes components...
	I1018 13:25:51.079886 1033107 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:25:51.103371 1033107 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-208258"
	I1018 13:25:51.103415 1033107 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:25:51.104188 1033107 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:25:51.121915 1033107 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:25:51.125584 1033107 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:25:51.125611 1033107 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:25:51.125689 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:51.137801 1033107 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:25:51.137825 1033107 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:25:51.137904 1033107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:25:51.179566 1033107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:25:51.179627 1033107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34182 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:25:51.453562 1033107 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:25:51.453763 1033107 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 13:25:51.465245 1033107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:25:51.529827 1033107 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:25:51.926165 1033107 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 13:25:51.930691 1033107 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-208258" to be "Ready" ...
	I1018 13:25:52.229776 1033107 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
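The host record injection logged above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 on this network). Minikube itself does this with the sed | kubectl replace pipeline shown in the log; as a hedged client-go sketch of the same edit (the Corefile indentation matched here is an assumption):

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts{} block ahead of the forward plugin so the in-cluster DNS
	// answers host.minikube.internal with the host gateway IP.
	hosts := "    hosts {\n       192.168.85.1 host.minikube.internal\n       fallthrough\n    }\n"
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		// Assumes the default Corefile indents plugins with four spaces.
		cm.Data["Corefile"] = strings.Replace(corefile, "    forward .", hosts+"    forward .", 1)
		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}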
	
	
	==> CRI-O <==
	Oct 18 13:25:40 embed-certs-774829 crio[843]: time="2025-10-18T13:25:40.98518972Z" level=info msg="Created container 6e65612b8225c90952cb407a1391fda334ba98eee3804186072ed65e6aff6436: kube-system/coredns-66bc5c9577-ch4qs/coredns" id=436716d6-982f-41f0-a88e-ff3d40fcb217 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:25:40 embed-certs-774829 crio[843]: time="2025-10-18T13:25:40.988480886Z" level=info msg="Starting container: 6e65612b8225c90952cb407a1391fda334ba98eee3804186072ed65e6aff6436" id=1f89bde2-5a65-4055-892d-ceccf2552d9d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:25:41 embed-certs-774829 crio[843]: time="2025-10-18T13:25:41.006245893Z" level=info msg="Started container" PID=1745 containerID=6e65612b8225c90952cb407a1391fda334ba98eee3804186072ed65e6aff6436 description=kube-system/coredns-66bc5c9577-ch4qs/coredns id=1f89bde2-5a65-4055-892d-ceccf2552d9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b3e59ba45a1fd7a4219c30311e7bfaff7bf020890221e69d0b91bd9b3824b59
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.796279371Z" level=info msg="Running pod sandbox: default/busybox/POD" id=387fdb12-3bf6-4564-9fb6-1e5d46fee3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.796369456Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.814140182Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:48c63260648b41d85a851c91624c14968f307504fcec2479a1265e474a2b30a4 UID:81334b31-a289-4f5b-8a24-8624dec0226c NetNS:/var/run/netns/b763b60a-9a2e-41ed-bbab-8048b0cfba59 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079570}] Aliases:map[]}"
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.814352558Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.834191337Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:48c63260648b41d85a851c91624c14968f307504fcec2479a1265e474a2b30a4 UID:81334b31-a289-4f5b-8a24-8624dec0226c NetNS:/var/run/netns/b763b60a-9a2e-41ed-bbab-8048b0cfba59 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079570}] Aliases:map[]}"
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.834344963Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.843844204Z" level=info msg="Ran pod sandbox 48c63260648b41d85a851c91624c14968f307504fcec2479a1265e474a2b30a4 with infra container: default/busybox/POD" id=387fdb12-3bf6-4564-9fb6-1e5d46fee3b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.845663361Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d8eba07c-6f37-464a-9e73-47ed32e082f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.84598174Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d8eba07c-6f37-464a-9e73-47ed32e082f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.846248024Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d8eba07c-6f37-464a-9e73-47ed32e082f6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.84965362Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f3de506f-6467-4366-a57d-aa13392955fe name=/runtime.v1.ImageService/PullImage
	Oct 18 13:25:44 embed-certs-774829 crio[843]: time="2025-10-18T13:25:44.85538365Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.136230118Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f3de506f-6467-4366-a57d-aa13392955fe name=/runtime.v1.ImageService/PullImage
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.136872776Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a84235a5-f483-4f55-9ad6-7698eeaf7a8a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.138203721Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a193944e-cc9f-4ae7-858e-b7435c7ef09f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.146644914Z" level=info msg="Creating container: default/busybox/busybox" id=5a059ee5-8980-4c7e-8fee-c8e909914b69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.147460169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.152181441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.152828661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.169613142Z" level=info msg="Created container 1d0a010fa67f28718ee50579eaea007339530ff5bd60f64dccc1c86e7e564587: default/busybox/busybox" id=5a059ee5-8980-4c7e-8fee-c8e909914b69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.17186542Z" level=info msg="Starting container: 1d0a010fa67f28718ee50579eaea007339530ff5bd60f64dccc1c86e7e564587" id=24a86ffe-49ee-47c9-b164-527a67d4c209 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:25:47 embed-certs-774829 crio[843]: time="2025-10-18T13:25:47.177310704Z" level=info msg="Started container" PID=1800 containerID=1d0a010fa67f28718ee50579eaea007339530ff5bd60f64dccc1c86e7e564587 description=default/busybox/busybox id=24a86ffe-49ee-47c9-b164-527a67d4c209 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48c63260648b41d85a851c91624c14968f307504fcec2479a1265e474a2b30a4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	1d0a010fa67f2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   48c63260648b4       busybox                                      default
	6e65612b8225c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago       Running             coredns                   0                   0b3e59ba45a1f       coredns-66bc5c9577-ch4qs                     kube-system
	b4d91bdcbbe20       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago       Running             storage-provisioner       0                   c9056c6fa43d3       storage-provisioner                          kube-system
	566fa05d1cd73       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      56 seconds ago       Running             kindnet-cni               0                   562166bc0f5bd       kindnet-zvmhf                                kube-system
	45004a3d56bf5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   6fbb74b74fe1a       kube-proxy-vqgcc                             kube-system
	6088b1e9b84f1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   f4d283d7eab83       kube-scheduler-embed-certs-774829            kube-system
	2ff3e35058bbe       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   555cdd8135349       kube-apiserver-embed-certs-774829            kube-system
	db751b5343bc2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   eaeca5d7da90c       etcd-embed-certs-774829                      kube-system
	15b2cf192de72       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   94eaaab56b3f0       kube-controller-manager-embed-certs-774829   kube-system
	
	
	==> coredns [6e65612b8225c90952cb407a1391fda334ba98eee3804186072ed65e6aff6436] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44212 - 34232 "HINFO IN 8850132557148466804.6921817737313886393. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028077254s
	
	
	==> describe nodes <==
	Name:               embed-certs-774829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-774829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=embed-certs-774829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_24_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:24:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-774829
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:25:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:25:55 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:25:55 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:25:55 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:25:55 +0000   Sat, 18 Oct 2025 13:25:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-774829
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bbac08b8-1da7-4bdc-9a1e-0df1153ffa18
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-ch4qs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-774829                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-zvmhf                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-774829             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-774829    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-vqgcc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-774829             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node embed-certs-774829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientPID
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node embed-certs-774829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node embed-certs-774829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node embed-certs-774829 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-774829 event: Registered Node embed-certs-774829 in Controller
	  Normal   NodeReady                16s                kubelet          Node embed-certs-774829 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 13:00] overlayfs: idmapped layers are currently not supported
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [db751b5343bc2b86de10145214ac504642aeb39ae6a9842eec77620a9a3b4e58] <==
	{"level":"warn","ts":"2025-10-18T13:24:49.092527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.107299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.143054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.146334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.165218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.185301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.224324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.231078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.286143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.308431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.333154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.345414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.363072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.391299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.402201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.425282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.472601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.491258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.516184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.527127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.566452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.592432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.629723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.659737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:24:49.806277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43996","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:25:56 up  5:08,  0 user,  load average: 2.98, 2.98, 2.49
	Linux embed-certs-774829 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [566fa05d1cd7392d41fbeb4f9b5bd61433db0b62f15b733cf285be0f0bbc6c4f] <==
	I1018 13:24:59.914424       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:24:59.916302       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:24:59.916438       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:24:59.916451       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:24:59.916464       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:25:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:25:00.145233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:25:00.145258       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:25:00.145269       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:25:00.145636       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:25:30.144839       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:25:30.145155       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:25:30.146581       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:25:30.146778       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:25:31.747712       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:25:31.747743       1 metrics.go:72] Registering metrics
	I1018 13:25:31.747807       1 controller.go:711] "Syncing nftables rules"
	I1018 13:25:40.147778       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:25:40.147828       1 main.go:301] handling current node
	I1018 13:25:50.144991       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:25:50.145125       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ff3e35058bbe37ae7a159082bde20561a567dfb14c5874a68a21589ba2e0410] <==
	I1018 13:24:50.998213       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:24:50.998220       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:24:50.998225       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:24:51.026133       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:24:51.026459       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 13:24:51.031821       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:24:51.048624       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:24:51.056460       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:24:51.583813       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 13:24:51.591028       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 13:24:51.591055       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:24:52.530052       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:24:52.584264       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:24:52.719033       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 13:24:52.726329       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 13:24:52.727457       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:24:52.735029       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:24:53.014615       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:24:53.697037       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:24:53.724873       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 13:24:53.735293       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 13:24:58.717171       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:24:59.083570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:24:59.131542       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:24:59.154750       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [15b2cf192de7230521a5dee7392043c6783a98fc95234658f2f654bbe8154394] <==
	I1018 13:24:58.020346       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 13:24:58.020569       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 13:24:58.020618       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 13:24:58.020797       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 13:24:58.022600       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:24:58.029473       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 13:24:58.036789       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 13:24:58.040114       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:24:58.043109       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 13:24:58.046729       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-774829" podCIDRs=["10.244.0.0/24"]
	I1018 13:24:58.046927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:24:58.050152       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 13:24:58.063474       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:24:58.063769       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 13:24:58.063872       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 13:24:58.063932       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 13:24:58.063974       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 13:24:58.064019       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:24:58.064021       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:24:58.064120       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 13:24:58.064198       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 13:24:58.064370       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:24:58.064064       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 13:24:58.070496       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 13:25:43.223240       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [45004a3d56bf57cfe551192b846f6e8961fdc4122ed23a0d7e11c4a85e7221c4] <==
	I1018 13:25:00.099141       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:25:00.461987       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:25:00.562975       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:25:00.563018       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:25:00.563093       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:25:00.627774       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:25:00.627849       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:25:00.645960       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:25:00.646310       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:25:00.646338       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:25:00.647727       1 config.go:200] "Starting service config controller"
	I1018 13:25:00.647748       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:25:00.660336       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:25:00.660427       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:25:00.660472       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:25:00.660497       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:25:00.661355       1 config.go:309] "Starting node config controller"
	I1018 13:25:00.661457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:25:00.661491       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:25:00.763107       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:25:00.763216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 13:25:00.763308       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6088b1e9b84f1c31ad4d3645395683b90af0944d60e5b61309147336a2e49a4c] <==
	I1018 13:24:52.085568       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:24:52.087993       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:24:52.088099       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:24:52.089006       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:24:52.089095       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 13:24:52.100581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 13:24:52.100805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 13:24:52.114648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 13:24:52.115074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 13:24:52.115199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 13:24:52.115311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 13:24:52.115401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:24:52.115490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 13:24:52.115580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:24:52.115863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 13:24:52.116032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 13:24:52.116135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 13:24:52.116217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:24:52.116305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 13:24:52.116381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 13:24:52.116551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 13:24:52.116628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 13:24:52.116653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 13:24:52.117349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1018 13:24:53.088997       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:24:54 embed-certs-774829 kubelet[1320]: E1018 13:24:54.838328    1320 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-embed-certs-774829\" already exists" pod="kube-system/kube-controller-manager-embed-certs-774829"
	Oct 18 13:24:54 embed-certs-774829 kubelet[1320]: E1018 13:24:54.840502    1320 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-774829\" already exists" pod="kube-system/kube-apiserver-embed-certs-774829"
	Oct 18 13:24:54 embed-certs-774829 kubelet[1320]: I1018 13:24:54.845815    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-774829" podStartSLOduration=0.845793597 podStartE2EDuration="845.793597ms" podCreationTimestamp="2025-10-18 13:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:24:54.800901146 +0000 UTC m=+1.281753982" watchObservedRunningTime="2025-10-18 13:24:54.845793597 +0000 UTC m=+1.326646425"
	Oct 18 13:24:58 embed-certs-774829 kubelet[1320]: I1018 13:24:58.060239    1320 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 13:24:58 embed-certs-774829 kubelet[1320]: I1018 13:24:58.061169    1320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.348354    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38917a63-de05-4840-9f65-146bd1ee0c38-kube-proxy\") pod \"kube-proxy-vqgcc\" (UID: \"38917a63-de05-4840-9f65-146bd1ee0c38\") " pod="kube-system/kube-proxy-vqgcc"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.348493    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38917a63-de05-4840-9f65-146bd1ee0c38-lib-modules\") pod \"kube-proxy-vqgcc\" (UID: \"38917a63-de05-4840-9f65-146bd1ee0c38\") " pod="kube-system/kube-proxy-vqgcc"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.348514    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv7sc\" (UniqueName: \"kubernetes.io/projected/38917a63-de05-4840-9f65-146bd1ee0c38-kube-api-access-jv7sc\") pod \"kube-proxy-vqgcc\" (UID: \"38917a63-de05-4840-9f65-146bd1ee0c38\") " pod="kube-system/kube-proxy-vqgcc"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.348636    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/35253ced-a772-4d59-9bf2-fa186ea9b826-cni-cfg\") pod \"kindnet-zvmhf\" (UID: \"35253ced-a772-4d59-9bf2-fa186ea9b826\") " pod="kube-system/kindnet-zvmhf"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.348656    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35253ced-a772-4d59-9bf2-fa186ea9b826-xtables-lock\") pod \"kindnet-zvmhf\" (UID: \"35253ced-a772-4d59-9bf2-fa186ea9b826\") " pod="kube-system/kindnet-zvmhf"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.348674    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzb6d\" (UniqueName: \"kubernetes.io/projected/35253ced-a772-4d59-9bf2-fa186ea9b826-kube-api-access-wzb6d\") pod \"kindnet-zvmhf\" (UID: \"35253ced-a772-4d59-9bf2-fa186ea9b826\") " pod="kube-system/kindnet-zvmhf"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.350736    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38917a63-de05-4840-9f65-146bd1ee0c38-xtables-lock\") pod \"kube-proxy-vqgcc\" (UID: \"38917a63-de05-4840-9f65-146bd1ee0c38\") " pod="kube-system/kube-proxy-vqgcc"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.350925    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35253ced-a772-4d59-9bf2-fa186ea9b826-lib-modules\") pod \"kindnet-zvmhf\" (UID: \"35253ced-a772-4d59-9bf2-fa186ea9b826\") " pod="kube-system/kindnet-zvmhf"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.487524    1320 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 13:24:59 embed-certs-774829 kubelet[1320]: I1018 13:24:59.967199    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zvmhf" podStartSLOduration=0.966460166 podStartE2EDuration="966.460166ms" podCreationTimestamp="2025-10-18 13:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:24:59.966198681 +0000 UTC m=+6.447051526" watchObservedRunningTime="2025-10-18 13:24:59.966460166 +0000 UTC m=+6.447313002"
	Oct 18 13:25:00 embed-certs-774829 kubelet[1320]: I1018 13:25:00.892403    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vqgcc" podStartSLOduration=1.892385658 podStartE2EDuration="1.892385658s" podCreationTimestamp="2025-10-18 13:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:25:00.892106442 +0000 UTC m=+7.372959278" watchObservedRunningTime="2025-10-18 13:25:00.892385658 +0000 UTC m=+7.373238486"
	Oct 18 13:25:40 embed-certs-774829 kubelet[1320]: I1018 13:25:40.290554    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 13:25:40 embed-certs-774829 kubelet[1320]: I1018 13:25:40.552938    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb67ffa9-c63a-4daf-8325-e9b1e881202b-config-volume\") pod \"coredns-66bc5c9577-ch4qs\" (UID: \"cb67ffa9-c63a-4daf-8325-e9b1e881202b\") " pod="kube-system/coredns-66bc5c9577-ch4qs"
	Oct 18 13:25:40 embed-certs-774829 kubelet[1320]: I1018 13:25:40.552988    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b-tmp\") pod \"storage-provisioner\" (UID: \"1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b\") " pod="kube-system/storage-provisioner"
	Oct 18 13:25:40 embed-certs-774829 kubelet[1320]: I1018 13:25:40.553011    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4m25\" (UniqueName: \"kubernetes.io/projected/1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b-kube-api-access-s4m25\") pod \"storage-provisioner\" (UID: \"1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b\") " pod="kube-system/storage-provisioner"
	Oct 18 13:25:40 embed-certs-774829 kubelet[1320]: I1018 13:25:40.553038    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6gdf\" (UniqueName: \"kubernetes.io/projected/cb67ffa9-c63a-4daf-8325-e9b1e881202b-kube-api-access-h6gdf\") pod \"coredns-66bc5c9577-ch4qs\" (UID: \"cb67ffa9-c63a-4daf-8325-e9b1e881202b\") " pod="kube-system/coredns-66bc5c9577-ch4qs"
	Oct 18 13:25:42 embed-certs-774829 kubelet[1320]: I1018 13:25:42.007849    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.007824533 podStartE2EDuration="42.007824533s" podCreationTimestamp="2025-10-18 13:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:25:41.020896573 +0000 UTC m=+47.501749401" watchObservedRunningTime="2025-10-18 13:25:42.007824533 +0000 UTC m=+48.488677361"
	Oct 18 13:25:42 embed-certs-774829 kubelet[1320]: I1018 13:25:42.048833    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ch4qs" podStartSLOduration=43.048812787 podStartE2EDuration="43.048812787s" podCreationTimestamp="2025-10-18 13:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:25:42.016895461 +0000 UTC m=+48.497748297" watchObservedRunningTime="2025-10-18 13:25:42.048812787 +0000 UTC m=+48.529665615"
	Oct 18 13:25:44 embed-certs-774829 kubelet[1320]: I1018 13:25:44.590258    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn5mn\" (UniqueName: \"kubernetes.io/projected/81334b31-a289-4f5b-8a24-8624dec0226c-kube-api-access-tn5mn\") pod \"busybox\" (UID: \"81334b31-a289-4f5b-8a24-8624dec0226c\") " pod="default/busybox"
	Oct 18 13:25:44 embed-certs-774829 kubelet[1320]: W1018 13:25:44.836649    1320 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/crio-48c63260648b41d85a851c91624c14968f307504fcec2479a1265e474a2b30a4 WatchSource:0}: Error finding container 48c63260648b41d85a851c91624c14968f307504fcec2479a1265e474a2b30a4: Status 404 returned error can't find the container with id 48c63260648b41d85a851c91624c14968f307504fcec2479a1265e474a2b30a4
	
	
	==> storage-provisioner [b4d91bdcbbe2009af512dd39f6888628d8c34cb705f3d4505d4225de790c033b] <==
	I1018 13:25:40.953815       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:25:41.005905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:25:41.006189       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 13:25:41.032589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:41.040439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:25:41.040712       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:25:41.041154       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e819a57-5518-4431-a3ad-90de48f83d9c", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-774829_2c68f1d7-4f14-49e2-84ec-42ad2d8e0cbd became leader
	I1018 13:25:41.043148       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-774829_2c68f1d7-4f14-49e2-84ec-42ad2d8e0cbd!
	W1018 13:25:41.071855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:41.079321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:25:41.144145       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-774829_2c68f1d7-4f14-49e2-84ec-42ad2d8e0cbd!
	W1018 13:25:43.082407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:43.087349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:45.093427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:45.114685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:47.118415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:47.123349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:49.126132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:49.130926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:51.137735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:51.160154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:53.163713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:53.168156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:55.172596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:25:55.178472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-774829 -n embed-certs-774829
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-774829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (301.053389ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:26:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-208258 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-208258 describe deploy/metrics-server -n kube-system: exit status 1 (87.920932ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-208258 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
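The exit status 11 (MK_ADDON_ENABLE_PAUSED) captured above comes from minikube's paused check, which runs "sudo runc list -f json" on the node and fails with "open /run/runc: no such file or directory". Below is a minimal sketch for rerunning that check by hand. The container name, the runc command, and the /run/runc path are taken from the output in this report; reaching the node with docker exec from the Jenkins host and the Go wrapper itself are assumptions for illustration only (the harness goes through minikube's own command runner).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile name from this report; the docker inspect output below shows the
	// KIC node container carries the same name.
	const node = "default-k8s-diff-port-208258"

	// Same command the failing "check paused" step reports. Running it through
	// docker exec is an assumption; the harness uses minikube's command runner.
	out, err := exec.Command("docker", "exec", node,
		"sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("runc list: err=%v\n%s\n", err, out)

	// The runc state directory the error message says is missing.
	out, err = exec.Command("docker", "exec", node, "ls", "-ld", "/run/runc").CombinedOutput()
	fmt.Printf("/run/runc: err=%v\n%s\n", err, out)
}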
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-208258
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-208258:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae",
	        "Created": "2025-10-18T13:25:16.393417854Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1033497,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:25:16.490727433Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/hostname",
	        "HostsPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/hosts",
	        "LogPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae-json.log",
	        "Name": "/default-k8s-diff-port-208258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-208258:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-208258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae",
	                "LowerDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-208258",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-208258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-208258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-208258",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-208258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c92628046c1ec353842b8c906290973001e2f692f307f402cf76f6fc7a318d3",
	            "SandboxKey": "/var/run/docker/netns/5c92628046c1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34182"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34183"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34186"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34184"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34185"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-208258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:92:79:5e:25:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "842f84fb2288b37127c8c8891c93fb974e3c77a976754988e22ee941caac1ff0",
	                    "EndpointID": "c4befc8a682ebf8e589de495735fc841cc82390e5db51891158125d20078011d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-208258",
	                        "43668e797f9a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
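For reference when reading the inspect dump above: each published node port (for example 8444/tcp, the API server port for this profile, bound to 127.0.0.1:34185 here) can be read back without scanning the JSON. A small sketch under the same assumptions as the earlier one (docker CLI on the host, container named after the profile):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "docker port" prints the host binding for a single published container port.
	out, err := exec.Command("docker", "port", "default-k8s-diff-port-208258", "8444/tcp").CombinedOutput()
	if err != nil {
		fmt.Println("docker port failed:", err)
		return
	}
	fmt.Printf("apiserver binding: %s", out) // expected 127.0.0.1:34185 per the inspect output above
}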
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-208258 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-208258 logs -n 25: (1.250805704s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-460322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ start   │ -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:21 UTC │ 18 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-460322 image list --format=json                                                                                                                                                                                               │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ pause   │ -p old-k8s-version-460322 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │                     │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                                                                                     │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │                     │
	│ stop    │ -p no-preload-779884 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ addons  │ enable dashboard -p no-preload-779884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:24 UTC │
	│ delete  │ -p cert-expiration-076887                                                                                                                                                                                                                     │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:24 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                                                                                                    │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                                                                                               │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ stop    │ -p embed-certs-774829 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:26 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:26:09
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:26:09.499083 1036440 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:26:09.499202 1036440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:26:09.499213 1036440 out.go:374] Setting ErrFile to fd 2...
	I1018 13:26:09.499218 1036440 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:26:09.499569 1036440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:26:09.500277 1036440 out.go:368] Setting JSON to false
	I1018 13:26:09.501323 1036440 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18522,"bootTime":1760775448,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:26:09.501425 1036440 start.go:141] virtualization:  
	I1018 13:26:09.504784 1036440 out.go:179] * [embed-certs-774829] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:26:09.508802 1036440 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:26:09.508997 1036440 notify.go:220] Checking for updates...
	I1018 13:26:09.512760 1036440 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:26:09.515715 1036440 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:26:09.518642 1036440 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:26:09.521631 1036440 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:26:09.524544 1036440 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:26:09.527981 1036440 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:26:09.528559 1036440 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:26:09.559292 1036440 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:26:09.559422 1036440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:26:09.618569 1036440 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:26:09.608696765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:26:09.618680 1036440 docker.go:318] overlay module found
	I1018 13:26:09.623798 1036440 out.go:179] * Using the docker driver based on existing profile
	I1018 13:26:09.626736 1036440 start.go:305] selected driver: docker
	I1018 13:26:09.626767 1036440 start.go:925] validating driver "docker" against &{Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:26:09.626921 1036440 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:26:09.627778 1036440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:26:09.683493 1036440 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:26:09.673736042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:26:09.683872 1036440 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:26:09.683898 1036440 cni.go:84] Creating CNI manager for ""
	I1018 13:26:09.683953 1036440 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:26:09.684002 1036440 start.go:349] cluster config:
	{Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:26:09.687272 1036440 out.go:179] * Starting "embed-certs-774829" primary control-plane node in "embed-certs-774829" cluster
	I1018 13:26:09.690043 1036440 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:26:09.693014 1036440 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:26:09.696039 1036440 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:26:09.696059 1036440 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:26:09.696114 1036440 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:26:09.696123 1036440 cache.go:58] Caching tarball of preloaded images
	I1018 13:26:09.696224 1036440 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:26:09.696235 1036440 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:26:09.696455 1036440 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/config.json ...
	I1018 13:26:09.718110 1036440 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:26:09.718134 1036440 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:26:09.718151 1036440 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:26:09.718229 1036440 start.go:360] acquireMachinesLock for embed-certs-774829: {Name:mk5aa8563d93509fb0e97633ae4ffa1630655c85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:26:09.718311 1036440 start.go:364] duration metric: took 52.357µs to acquireMachinesLock for "embed-certs-774829"
	I1018 13:26:09.718337 1036440 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:26:09.718349 1036440 fix.go:54] fixHost starting: 
	I1018 13:26:09.718608 1036440 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:26:09.737886 1036440 fix.go:112] recreateIfNeeded on embed-certs-774829: state=Stopped err=<nil>
	W1018 13:26:09.737925 1036440 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 13:26:07.434431 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	W1018 13:26:09.934446 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	I1018 13:26:09.741164 1036440 out.go:252] * Restarting existing docker container for "embed-certs-774829" ...
	I1018 13:26:09.741252 1036440 cli_runner.go:164] Run: docker start embed-certs-774829
	I1018 13:26:10.021231 1036440 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:26:10.058752 1036440 kic.go:430] container "embed-certs-774829" state is running.
	I1018 13:26:10.059163 1036440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:26:10.085138 1036440 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/config.json ...
	I1018 13:26:10.085397 1036440 machine.go:93] provisionDockerMachine start ...
	I1018 13:26:10.085468 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:10.112928 1036440 main.go:141] libmachine: Using SSH client type: native
	I1018 13:26:10.113276 1036440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1018 13:26:10.113287 1036440 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:26:10.115393 1036440 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44530->127.0.0.1:34187: read: connection reset by peer
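	(Note: the SSH endpoint above is not the container's port 22 but the host port Docker published for it, 34187 here, resolved with the inspect template in the preceding Run line; the first dial was reset while the container was still booting and is retried. Below is a minimal Go sketch of that port lookup. It assumes only that docker is on PATH; the helper name is illustrative and is not minikube's implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort resolves the host port Docker published for the container's
	// 22/tcp mapping, using the same inspect template as the log line above.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("embed-certs-774829")
		if err != nil {
			fmt.Println(err)
			return
		}
		// The provisioner then dials 127.0.0.1:<port>; in the log the first
		// attempt was reset while the container was starting and succeeded on retry.
		fmt.Println("ssh endpoint: 127.0.0.1:" + port)
	}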
	I1018 13:26:13.263974 1036440 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-774829
	
	I1018 13:26:13.263999 1036440 ubuntu.go:182] provisioning hostname "embed-certs-774829"
	I1018 13:26:13.264072 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:13.283610 1036440 main.go:141] libmachine: Using SSH client type: native
	I1018 13:26:13.284078 1036440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1018 13:26:13.284096 1036440 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-774829 && echo "embed-certs-774829" | sudo tee /etc/hostname
	I1018 13:26:13.446829 1036440 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-774829
	
	I1018 13:26:13.446930 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:13.465340 1036440 main.go:141] libmachine: Using SSH client type: native
	I1018 13:26:13.465675 1036440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1018 13:26:13.465700 1036440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-774829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-774829/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-774829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:26:13.616508 1036440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:26:13.616539 1036440 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:26:13.616563 1036440 ubuntu.go:190] setting up certificates
	I1018 13:26:13.616572 1036440 provision.go:84] configureAuth start
	I1018 13:26:13.616640 1036440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:26:13.634830 1036440 provision.go:143] copyHostCerts
	I1018 13:26:13.634912 1036440 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:26:13.634932 1036440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:26:13.635011 1036440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:26:13.635122 1036440 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:26:13.635134 1036440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:26:13.635164 1036440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:26:13.635234 1036440 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:26:13.635243 1036440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:26:13.635272 1036440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:26:13.635360 1036440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.embed-certs-774829 san=[127.0.0.1 192.168.76.2 embed-certs-774829 localhost minikube]
	W1018 13:26:12.433711 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	W1018 13:26:14.436659 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	I1018 13:26:15.028331 1036440 provision.go:177] copyRemoteCerts
	I1018 13:26:15.028451 1036440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:26:15.028587 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:15.060095 1036440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:26:15.172014 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:26:15.193876 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 13:26:15.214226 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 13:26:15.233649 1036440 provision.go:87] duration metric: took 1.617053558s to configureAuth
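	(Note: configureAuth above refreshes the host CA/client certs and generates a server certificate whose SANs are the ones listed at 13:26:13.635: 127.0.0.1, 192.168.76.2, embed-certs-774829, localhost, minikube. The following is a minimal, self-contained Go sketch of issuing such a SAN'd server certificate from a CA with crypto/x509. Key size, subject names and the 26280h lifetime mirror values seen in this log and config; error handling is elided; this is an illustration, not minikube's actual code.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key pair and self-signed CA certificate (stand-in for .minikube/certs/ca.pem).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs listed in the provisioning log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-774829"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-774829", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// Emit the server certificate in the same PEM form the provisioner copies around.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}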
	I1018 13:26:15.233678 1036440 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:26:15.233869 1036440 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:26:15.233985 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:15.252985 1036440 main.go:141] libmachine: Using SSH client type: native
	I1018 13:26:15.253307 1036440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34187 <nil> <nil>}
	I1018 13:26:15.253327 1036440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:26:15.595544 1036440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:26:15.595631 1036440 machine.go:96] duration metric: took 5.510223151s to provisionDockerMachine
	I1018 13:26:15.595711 1036440 start.go:293] postStartSetup for "embed-certs-774829" (driver="docker")
	I1018 13:26:15.595741 1036440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:26:15.595823 1036440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:26:15.595902 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:15.620651 1036440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:26:15.728376 1036440 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:26:15.732672 1036440 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:26:15.732705 1036440 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:26:15.732717 1036440 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:26:15.732771 1036440 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:26:15.732867 1036440 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:26:15.732978 1036440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:26:15.741110 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:26:15.762210 1036440 start.go:296] duration metric: took 166.466022ms for postStartSetup
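	(Note: the filesync step in postStartSetup above mirrors anything under .minikube/files into the node at the same absolute path, e.g. files/etc/ssl/certs/8360862.pem becomes /etc/ssl/certs/8360862.pem, with the scp line above doing the actual copy. Below is a small Go sketch that only computes that source-to-destination mapping from a local walk; it is illustrative and not minikube's implementation.)

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	// destPaths maps each regular file under root to the node-side absolute path
	// it would be copied to, preserving the relative layout.
	func destPaths(root string) (map[string]string, error) {
		dest := map[string]string{}
		err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, relErr := filepath.Rel(root, p)
			if relErr != nil {
				return relErr
			}
			dest[p] = "/" + filepath.ToSlash(rel)
			return nil
		})
		return dest, err
	}

	func main() {
		m, err := destPaths("/home/jenkins/minikube-integration/21647-834184/.minikube/files")
		if err != nil {
			fmt.Println(err)
			return
		}
		for src, dst := range m {
			fmt.Println(src, "->", dst)
		}
	}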
	I1018 13:26:15.762294 1036440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:26:15.762344 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:15.782622 1036440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:26:15.884954 1036440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:26:15.889902 1036440 fix.go:56] duration metric: took 6.171537597s for fixHost
	I1018 13:26:15.889927 1036440 start.go:83] releasing machines lock for "embed-certs-774829", held for 6.17160099s
	I1018 13:26:15.890005 1036440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-774829
	I1018 13:26:15.910031 1036440 ssh_runner.go:195] Run: cat /version.json
	I1018 13:26:15.910090 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:15.910375 1036440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:26:15.910458 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:15.935182 1036440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:26:15.945647 1036440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:26:16.043699 1036440 ssh_runner.go:195] Run: systemctl --version
	I1018 13:26:16.136963 1036440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:26:16.175827 1036440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:26:16.181122 1036440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:26:16.181207 1036440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:26:16.190705 1036440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:26:16.190776 1036440 start.go:495] detecting cgroup driver to use...
	I1018 13:26:16.190826 1036440 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:26:16.190925 1036440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:26:16.207166 1036440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:26:16.223816 1036440 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:26:16.223932 1036440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:26:16.240542 1036440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:26:16.254781 1036440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:26:16.378459 1036440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:26:16.509611 1036440 docker.go:234] disabling docker service ...
	I1018 13:26:16.509709 1036440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:26:16.525784 1036440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:26:16.538876 1036440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:26:16.669337 1036440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:26:16.795138 1036440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:26:16.812437 1036440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:26:16.827960 1036440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:26:16.828062 1036440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:26:16.839417 1036440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:26:16.839529 1036440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:26:16.849569 1036440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:26:16.858497 1036440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:26:16.868800 1036440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:26:16.877928 1036440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:26:16.890306 1036440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:26:16.900744 1036440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:26:16.910326 1036440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:26:16.919248 1036440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:26:16.927155 1036440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:26:17.054873 1036440 ssh_runner.go:195] Run: sudo systemctl restart crio
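	(Note: the sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed, setting pause_image and cgroup_manager and adjusting conmon_cgroup and default_sysctls, before reloading systemd and restarting crio. Below is a minimal Go sketch of the same rewrite-a-key pattern for the two simple keys; the path and values are taken from the log, but the code is an illustration, not minikube's implementation.)

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteKey replaces any existing `key = ...` line in conf with the given
	// quoted value, mirroring the sed invocations in the log.
	func rewriteKey(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
	}

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
		conf, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
		conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile(path, conf, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}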
	I1018 13:26:17.201180 1036440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:26:17.201252 1036440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:26:17.205667 1036440 start.go:563] Will wait 60s for crictl version
	I1018 13:26:17.205753 1036440 ssh_runner.go:195] Run: which crictl
	I1018 13:26:17.211377 1036440 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:26:17.236116 1036440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
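	(Note: startup above waits up to 60s for /var/run/crio/crio.sock to exist and then up to 60s for crictl to report a version before continuing. Below is a minimal Go sketch of that bounded wait on the socket path; the 500ms poll interval is an assumption and the code is not minikube's implementation.)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for the CRI socket until it exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}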
	I1018 13:26:17.236219 1036440 ssh_runner.go:195] Run: crio --version
	I1018 13:26:17.265148 1036440 ssh_runner.go:195] Run: crio --version
	I1018 13:26:17.300693 1036440 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:26:17.303482 1036440 cli_runner.go:164] Run: docker network inspect embed-certs-774829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:26:17.320228 1036440 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 13:26:17.324314 1036440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:26:17.334030 1036440 kubeadm.go:883] updating cluster {Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:26:17.334159 1036440 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:26:17.334211 1036440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:26:17.367311 1036440 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:26:17.367338 1036440 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:26:17.367407 1036440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:26:17.397916 1036440 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:26:17.397940 1036440 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:26:17.397949 1036440 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 13:26:17.398065 1036440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-774829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:26:17.398150 1036440 ssh_runner.go:195] Run: crio config
	I1018 13:26:17.467897 1036440 cni.go:84] Creating CNI manager for ""
	I1018 13:26:17.467932 1036440 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:26:17.467949 1036440 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:26:17.467972 1036440 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-774829 NodeName:embed-certs-774829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:26:17.468126 1036440 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-774829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:26:17.468225 1036440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:26:17.477139 1036440 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:26:17.477240 1036440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:26:17.485109 1036440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 13:26:17.498608 1036440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:26:17.512048 1036440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 13:26:17.526890 1036440 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:26:17.530619 1036440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:26:17.541563 1036440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:26:17.656247 1036440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:26:17.673693 1036440 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829 for IP: 192.168.76.2
	I1018 13:26:17.673716 1036440 certs.go:195] generating shared ca certs ...
	I1018 13:26:17.673734 1036440 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:26:17.673912 1036440 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:26:17.673983 1036440 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:26:17.673999 1036440 certs.go:257] generating profile certs ...
	I1018 13:26:17.674115 1036440 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/client.key
	I1018 13:26:17.674248 1036440 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key.971cb07f
	I1018 13:26:17.674318 1036440 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key
	I1018 13:26:17.674461 1036440 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:26:17.674525 1036440 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:26:17.674541 1036440 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:26:17.674579 1036440 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:26:17.674624 1036440 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:26:17.674654 1036440 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:26:17.674721 1036440 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:26:17.675411 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:26:17.696674 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:26:17.715472 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:26:17.734128 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:26:17.756059 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 13:26:17.774462 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:26:17.794127 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:26:17.814175 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/embed-certs-774829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 13:26:17.836620 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:26:17.859182 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:26:17.882084 1036440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:26:17.905271 1036440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:26:17.930127 1036440 ssh_runner.go:195] Run: openssl version
	I1018 13:26:17.942208 1036440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:26:17.951115 1036440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:26:17.955323 1036440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:26:17.955389 1036440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:26:17.999932 1036440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:26:18.011715 1036440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:26:18.032283 1036440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:26:18.045135 1036440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:26:18.045284 1036440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:26:18.108277 1036440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:26:18.117332 1036440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:26:18.126108 1036440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:26:18.131223 1036440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:26:18.131311 1036440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:26:18.172717 1036440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 13:26:18.181149 1036440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:26:18.185115 1036440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:26:18.226410 1036440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:26:18.267524 1036440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:26:18.310117 1036440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:26:18.359768 1036440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:26:18.431035 1036440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
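	(Note: each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. An equivalent check in Go, parsing the PEM and comparing NotAfter, is sketched below; the path is one of the certs checked above and the helper is illustrative only.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same question `openssl x509 -checkend 86400` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}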
	I1018 13:26:18.503667 1036440 kubeadm.go:400] StartCluster: {Name:embed-certs-774829 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-774829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:26:18.503810 1036440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:26:18.503914 1036440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:26:18.572149 1036440 cri.go:89] found id: "a43c33d591b5ab9bb0ab2cf0448a86a485b202dc1d02bb68cae0cb40cd379794"
	I1018 13:26:18.572224 1036440 cri.go:89] found id: "7920a44c552e4c5e2ad627678ddd2e1ca5f62a7398b052140a83b7d76c068d6e"
	I1018 13:26:18.572257 1036440 cri.go:89] found id: "fa361f5a5688b380f5f99d0c7c6b08eba214e61325f08a0579323568e2dc4974"
	I1018 13:26:18.572295 1036440 cri.go:89] found id: "c9201764369f43ee1bb5e0a3d7d47a5bff8966959e69a3db59c9b1d1b71735b1"
	I1018 13:26:18.572316 1036440 cri.go:89] found id: ""
	I1018 13:26:18.572402 1036440 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:26:18.593515 1036440 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:26:18Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:26:18.593674 1036440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:26:18.609236 1036440 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 13:26:18.609300 1036440 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:26:18.609384 1036440 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:26:18.622303 1036440 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:26:18.622944 1036440 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-774829" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:26:18.623272 1036440 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-774829" cluster setting kubeconfig missing "embed-certs-774829" context setting]
	I1018 13:26:18.623820 1036440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:26:18.625467 1036440 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:26:18.642263 1036440 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 13:26:18.642340 1036440 kubeadm.go:601] duration metric: took 33.01943ms to restartPrimaryControlPlane
	I1018 13:26:18.642365 1036440 kubeadm.go:402] duration metric: took 138.721927ms to StartCluster
	I1018 13:26:18.642410 1036440 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:26:18.642495 1036440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:26:18.643921 1036440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:26:18.644229 1036440 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:26:18.644408 1036440 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:26:18.644858 1036440 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-774829"
	I1018 13:26:18.644893 1036440 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-774829"
	W1018 13:26:18.644928 1036440 addons.go:247] addon storage-provisioner should already be in state true
	I1018 13:26:18.644976 1036440 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:26:18.645502 1036440 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:26:18.645703 1036440 addons.go:69] Setting dashboard=true in profile "embed-certs-774829"
	I1018 13:26:18.645740 1036440 addons.go:238] Setting addon dashboard=true in "embed-certs-774829"
	W1018 13:26:18.645924 1036440 addons.go:247] addon dashboard should already be in state true
	I1018 13:26:18.645974 1036440 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:26:18.646036 1036440 addons.go:69] Setting default-storageclass=true in profile "embed-certs-774829"
	I1018 13:26:18.646050 1036440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-774829"
	I1018 13:26:18.646315 1036440 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:26:18.646833 1036440 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:26:18.651884 1036440 out.go:179] * Verifying Kubernetes components...
	I1018 13:26:18.644638 1036440 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:26:18.657293 1036440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:26:18.708781 1036440 addons.go:238] Setting addon default-storageclass=true in "embed-certs-774829"
	W1018 13:26:18.708809 1036440 addons.go:247] addon default-storageclass should already be in state true
	I1018 13:26:18.708837 1036440 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:26:18.709282 1036440 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:26:18.715130 1036440 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:26:18.719062 1036440 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:26:18.719087 1036440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:26:18.719155 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:18.733678 1036440 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 13:26:18.739786 1036440 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 13:26:18.742684 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 13:26:18.742708 1036440 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 13:26:18.742782 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:18.760861 1036440 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:26:18.760883 1036440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:26:18.760951 1036440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:26:18.769297 1036440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:26:18.807905 1036440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:26:18.809664 1036440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:26:19.020416 1036440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:26:19.048872 1036440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:26:19.054313 1036440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:26:19.078430 1036440 node_ready.go:35] waiting up to 6m0s for node "embed-certs-774829" to be "Ready" ...
	I1018 13:26:19.117969 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 13:26:19.118042 1036440 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 13:26:19.185927 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 13:26:19.186006 1036440 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 13:26:19.245779 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 13:26:19.245845 1036440 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 13:26:19.329248 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 13:26:19.329323 1036440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 13:26:19.355392 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 13:26:19.355472 1036440 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 13:26:19.371347 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 13:26:19.371431 1036440 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 13:26:19.401563 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 13:26:19.401648 1036440 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 13:26:19.421398 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 13:26:19.421474 1036440 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 13:26:19.457408 1036440 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:26:19.457488 1036440 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 13:26:19.483056 1036440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1018 13:26:16.934810 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	W1018 13:26:19.434224 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	I1018 13:26:23.400369 1036440 node_ready.go:49] node "embed-certs-774829" is "Ready"
	I1018 13:26:23.400397 1036440 node_ready.go:38] duration metric: took 4.321885701s for node "embed-certs-774829" to be "Ready" ...
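	(Note: the node_ready wait above polls the node object until its Ready condition reports True, with a 6m0s ceiling. Below is a minimal sketch of that readiness poll using kubectl jsonpath rather than minikube's internal client; it assumes kubectl is on PATH with a kubeconfig pointing at this cluster.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// nodeReady asks kubectl for the node's Ready condition status ("True"/"False").
	func nodeReady(node string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "node", node, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
		for time.Now().Before(deadline) {
			if ok, err := nodeReady("embed-certs-774829"); err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for Ready")
	}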
	I1018 13:26:23.400410 1036440 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:26:23.400469 1036440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:26:25.252058 1036440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.203153906s)
	I1018 13:26:25.252128 1036440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.197741264s)
	I1018 13:26:25.307243 1036440 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.824092447s)
	I1018 13:26:25.307401 1036440 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.906921106s)
	I1018 13:26:25.307423 1036440 api_server.go:72] duration metric: took 6.662749707s to wait for apiserver process to appear ...
	I1018 13:26:25.307429 1036440 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:26:25.307446 1036440 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
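	(Note: the healthz wait above issues GETs against https://192.168.76.2:8443/healthz until it returns 200; the 500 responses below show the apiserver is up but the rbac/bootstrap-roles post-start hook has not finished, so the check is retried. A minimal Go probe of that endpoint follows; skipping TLS verification is a shortcut for the sketch, a real client would trust the cluster CA instead.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver healthz endpoint from the log.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A 500 listing a failed post-start hook (as in the log output below) means
		// the apiserver is serving but not yet fully initialized, so the caller retries.
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}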
	I1018 13:26:25.310400 1036440 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-774829 addons enable metrics-server
	
	I1018 13:26:25.313232 1036440 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1018 13:26:21.434564 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	W1018 13:26:23.933968 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	I1018 13:26:25.316863 1036440 addons.go:514] duration metric: took 6.672446224s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 13:26:25.319115 1036440 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:26:25.319145 1036440 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:26:25.808331 1036440 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:26:25.819354 1036440 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 13:26:25.820811 1036440 api_server.go:141] control plane version: v1.34.1
	I1018 13:26:25.820852 1036440 api_server.go:131] duration metric: took 513.416532ms to wait for apiserver health ...
	I1018 13:26:25.820877 1036440 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:26:25.825101 1036440 system_pods.go:59] 8 kube-system pods found
	I1018 13:26:25.825153 1036440 system_pods.go:61] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:26:25.825178 1036440 system_pods.go:61] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:26:25.825190 1036440 system_pods.go:61] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:26:25.825201 1036440 system_pods.go:61] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:26:25.825238 1036440 system_pods.go:61] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:26:25.825251 1036440 system_pods.go:61] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:26:25.825258 1036440 system_pods.go:61] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:26:25.825263 1036440 system_pods.go:61] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Running
	I1018 13:26:25.825275 1036440 system_pods.go:74] duration metric: took 4.383855ms to wait for pod list to return data ...
	I1018 13:26:25.825301 1036440 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:26:25.828456 1036440 default_sa.go:45] found service account: "default"
	I1018 13:26:25.828524 1036440 default_sa.go:55] duration metric: took 3.209621ms for default service account to be created ...
	I1018 13:26:25.828549 1036440 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:26:25.831929 1036440 system_pods.go:86] 8 kube-system pods found
	I1018 13:26:25.832010 1036440 system_pods.go:89] "coredns-66bc5c9577-ch4qs" [cb67ffa9-c63a-4daf-8325-e9b1e881202b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:26:25.832028 1036440 system_pods.go:89] "etcd-embed-certs-774829" [9f1be190-65b9-4a3c-b28f-8825f55b27ad] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:26:25.832036 1036440 system_pods.go:89] "kindnet-zvmhf" [35253ced-a772-4d59-9bf2-fa186ea9b826] Running
	I1018 13:26:25.832044 1036440 system_pods.go:89] "kube-apiserver-embed-certs-774829" [ecdc9b0c-6a1c-4e04-8d3f-657c19221fc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:26:25.832051 1036440 system_pods.go:89] "kube-controller-manager-embed-certs-774829" [ef36fc69-de6f-45c4-bb95-3598d91b04d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:26:25.832055 1036440 system_pods.go:89] "kube-proxy-vqgcc" [38917a63-de05-4840-9f65-146bd1ee0c38] Running
	I1018 13:26:25.832069 1036440 system_pods.go:89] "kube-scheduler-embed-certs-774829" [9f67d8ca-e2b7-4a2b-a73a-4d7210be4990] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:26:25.832074 1036440 system_pods.go:89] "storage-provisioner" [1d20f2f9-ccfb-42bf-bfe3-4f4c2b97b91b] Running
	I1018 13:26:25.832089 1036440 system_pods.go:126] duration metric: took 3.519927ms to wait for k8s-apps to be running ...
	I1018 13:26:25.832098 1036440 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:26:25.832161 1036440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:26:25.846470 1036440 system_svc.go:56] duration metric: took 14.361484ms WaitForService to wait for kubelet
	I1018 13:26:25.846501 1036440 kubeadm.go:586] duration metric: took 7.201825679s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:26:25.846522 1036440 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:26:25.849624 1036440 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:26:25.849710 1036440 node_conditions.go:123] node cpu capacity is 2
	I1018 13:26:25.849733 1036440 node_conditions.go:105] duration metric: took 3.204912ms to run NodePressure ...
	I1018 13:26:25.849746 1036440 start.go:241] waiting for startup goroutines ...
	I1018 13:26:25.849753 1036440 start.go:246] waiting for cluster config update ...
	I1018 13:26:25.849764 1036440 start.go:255] writing updated cluster config ...
	I1018 13:26:25.850082 1036440 ssh_runner.go:195] Run: rm -f paused
	I1018 13:26:25.853711 1036440 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:26:25.859154 1036440 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ch4qs" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 13:26:27.866360 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	W1018 13:26:25.934143 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	W1018 13:26:28.433654 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	W1018 13:26:30.433963 1033107 node_ready.go:57] node "default-k8s-diff-port-208258" has "Ready":"False" status (will retry)
	W1018 13:26:30.365937 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	W1018 13:26:32.871525 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	I1018 13:26:31.934570 1033107 node_ready.go:49] node "default-k8s-diff-port-208258" is "Ready"
	I1018 13:26:31.934598 1033107 node_ready.go:38] duration metric: took 40.0038393s for node "default-k8s-diff-port-208258" to be "Ready" ...
	I1018 13:26:31.934610 1033107 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:26:31.934666 1033107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:26:31.948901 1033107 api_server.go:72] duration metric: took 40.885344027s to wait for apiserver process to appear ...
	I1018 13:26:31.948925 1033107 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:26:31.948945 1033107 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1018 13:26:31.962361 1033107 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1018 13:26:31.965100 1033107 api_server.go:141] control plane version: v1.34.1
	I1018 13:26:31.965126 1033107 api_server.go:131] duration metric: took 16.194309ms to wait for apiserver health ...
	I1018 13:26:31.965135 1033107 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:26:31.972760 1033107 system_pods.go:59] 8 kube-system pods found
	I1018 13:26:31.972843 1033107 system_pods.go:61] "coredns-66bc5c9577-2g4gz" [66edc35b-17da-44f9-93e0-c3178017ebd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:26:31.972868 1033107 system_pods.go:61] "etcd-default-k8s-diff-port-208258" [f9d70fe3-edd0-4a7b-8974-89ff801e48f9] Running
	I1018 13:26:31.972911 1033107 system_pods.go:61] "kindnet-4l67c" [2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b] Running
	I1018 13:26:31.972934 1033107 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-208258" [b83f8704-260a-42db-8272-297d0acdba03] Running
	I1018 13:26:31.972953 1033107 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-208258" [4fc41d78-33ae-4779-b133-6472807a0b28] Running
	I1018 13:26:31.972975 1033107 system_pods.go:61] "kube-proxy-q5bvt" [6398b812-78fd-404d-97b0-222ee6a40671] Running
	I1018 13:26:31.972996 1033107 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-208258" [c1be7446-fd2a-4f61-a46f-56ad81858926] Running
	I1018 13:26:31.973031 1033107 system_pods.go:61] "storage-provisioner" [c808af22-0cac-4ac1-bfe8-8d3338c7d048] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:26:31.973051 1033107 system_pods.go:74] duration metric: took 7.910921ms to wait for pod list to return data ...
	I1018 13:26:31.973073 1033107 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:26:31.976381 1033107 default_sa.go:45] found service account: "default"
	I1018 13:26:31.976450 1033107 default_sa.go:55] duration metric: took 3.354715ms for default service account to be created ...
	I1018 13:26:31.976473 1033107 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 13:26:31.981253 1033107 system_pods.go:86] 8 kube-system pods found
	I1018 13:26:31.981342 1033107 system_pods.go:89] "coredns-66bc5c9577-2g4gz" [66edc35b-17da-44f9-93e0-c3178017ebd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:26:31.981365 1033107 system_pods.go:89] "etcd-default-k8s-diff-port-208258" [f9d70fe3-edd0-4a7b-8974-89ff801e48f9] Running
	I1018 13:26:31.981405 1033107 system_pods.go:89] "kindnet-4l67c" [2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b] Running
	I1018 13:26:31.981430 1033107 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-208258" [b83f8704-260a-42db-8272-297d0acdba03] Running
	I1018 13:26:31.981451 1033107 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-208258" [4fc41d78-33ae-4779-b133-6472807a0b28] Running
	I1018 13:26:31.981471 1033107 system_pods.go:89] "kube-proxy-q5bvt" [6398b812-78fd-404d-97b0-222ee6a40671] Running
	I1018 13:26:31.981492 1033107 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-208258" [c1be7446-fd2a-4f61-a46f-56ad81858926] Running
	I1018 13:26:31.981534 1033107 system_pods.go:89] "storage-provisioner" [c808af22-0cac-4ac1-bfe8-8d3338c7d048] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:26:31.981576 1033107 retry.go:31] will retry after 270.375987ms: missing components: kube-dns
	I1018 13:26:32.256553 1033107 system_pods.go:86] 8 kube-system pods found
	I1018 13:26:32.256641 1033107 system_pods.go:89] "coredns-66bc5c9577-2g4gz" [66edc35b-17da-44f9-93e0-c3178017ebd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:26:32.256664 1033107 system_pods.go:89] "etcd-default-k8s-diff-port-208258" [f9d70fe3-edd0-4a7b-8974-89ff801e48f9] Running
	I1018 13:26:32.256705 1033107 system_pods.go:89] "kindnet-4l67c" [2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b] Running
	I1018 13:26:32.256734 1033107 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-208258" [b83f8704-260a-42db-8272-297d0acdba03] Running
	I1018 13:26:32.256754 1033107 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-208258" [4fc41d78-33ae-4779-b133-6472807a0b28] Running
	I1018 13:26:32.256777 1033107 system_pods.go:89] "kube-proxy-q5bvt" [6398b812-78fd-404d-97b0-222ee6a40671] Running
	I1018 13:26:32.256810 1033107 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-208258" [c1be7446-fd2a-4f61-a46f-56ad81858926] Running
	I1018 13:26:32.256845 1033107 system_pods.go:89] "storage-provisioner" [c808af22-0cac-4ac1-bfe8-8d3338c7d048] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:26:32.256876 1033107 retry.go:31] will retry after 275.079407ms: missing components: kube-dns
	I1018 13:26:32.537119 1033107 system_pods.go:86] 8 kube-system pods found
	I1018 13:26:32.537224 1033107 system_pods.go:89] "coredns-66bc5c9577-2g4gz" [66edc35b-17da-44f9-93e0-c3178017ebd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:26:32.537248 1033107 system_pods.go:89] "etcd-default-k8s-diff-port-208258" [f9d70fe3-edd0-4a7b-8974-89ff801e48f9] Running
	I1018 13:26:32.537283 1033107 system_pods.go:89] "kindnet-4l67c" [2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b] Running
	I1018 13:26:32.537307 1033107 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-208258" [b83f8704-260a-42db-8272-297d0acdba03] Running
	I1018 13:26:32.537326 1033107 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-208258" [4fc41d78-33ae-4779-b133-6472807a0b28] Running
	I1018 13:26:32.537347 1033107 system_pods.go:89] "kube-proxy-q5bvt" [6398b812-78fd-404d-97b0-222ee6a40671] Running
	I1018 13:26:32.537368 1033107 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-208258" [c1be7446-fd2a-4f61-a46f-56ad81858926] Running
	I1018 13:26:32.537405 1033107 system_pods.go:89] "storage-provisioner" [c808af22-0cac-4ac1-bfe8-8d3338c7d048] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:26:32.537434 1033107 retry.go:31] will retry after 321.74303ms: missing components: kube-dns
	I1018 13:26:32.864834 1033107 system_pods.go:86] 8 kube-system pods found
	I1018 13:26:32.864869 1033107 system_pods.go:89] "coredns-66bc5c9577-2g4gz" [66edc35b-17da-44f9-93e0-c3178017ebd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 13:26:32.864877 1033107 system_pods.go:89] "etcd-default-k8s-diff-port-208258" [f9d70fe3-edd0-4a7b-8974-89ff801e48f9] Running
	I1018 13:26:32.864884 1033107 system_pods.go:89] "kindnet-4l67c" [2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b] Running
	I1018 13:26:32.864888 1033107 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-208258" [b83f8704-260a-42db-8272-297d0acdba03] Running
	I1018 13:26:32.864910 1033107 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-208258" [4fc41d78-33ae-4779-b133-6472807a0b28] Running
	I1018 13:26:32.864914 1033107 system_pods.go:89] "kube-proxy-q5bvt" [6398b812-78fd-404d-97b0-222ee6a40671] Running
	I1018 13:26:32.864919 1033107 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-208258" [c1be7446-fd2a-4f61-a46f-56ad81858926] Running
	I1018 13:26:32.864930 1033107 system_pods.go:89] "storage-provisioner" [c808af22-0cac-4ac1-bfe8-8d3338c7d048] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 13:26:32.864953 1033107 retry.go:31] will retry after 388.197807ms: missing components: kube-dns
	I1018 13:26:33.267569 1033107 system_pods.go:86] 8 kube-system pods found
	I1018 13:26:33.267619 1033107 system_pods.go:89] "coredns-66bc5c9577-2g4gz" [66edc35b-17da-44f9-93e0-c3178017ebd6] Running
	I1018 13:26:33.267635 1033107 system_pods.go:89] "etcd-default-k8s-diff-port-208258" [f9d70fe3-edd0-4a7b-8974-89ff801e48f9] Running
	I1018 13:26:33.267715 1033107 system_pods.go:89] "kindnet-4l67c" [2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b] Running
	I1018 13:26:33.267733 1033107 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-208258" [b83f8704-260a-42db-8272-297d0acdba03] Running
	I1018 13:26:33.267739 1033107 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-208258" [4fc41d78-33ae-4779-b133-6472807a0b28] Running
	I1018 13:26:33.267749 1033107 system_pods.go:89] "kube-proxy-q5bvt" [6398b812-78fd-404d-97b0-222ee6a40671] Running
	I1018 13:26:33.267758 1033107 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-208258" [c1be7446-fd2a-4f61-a46f-56ad81858926] Running
	I1018 13:26:33.267768 1033107 system_pods.go:89] "storage-provisioner" [c808af22-0cac-4ac1-bfe8-8d3338c7d048] Running
	I1018 13:26:33.267778 1033107 system_pods.go:126] duration metric: took 1.291285487s to wait for k8s-apps to be running ...
	I1018 13:26:33.267800 1033107 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 13:26:33.267889 1033107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:26:33.293286 1033107 system_svc.go:56] duration metric: took 25.469496ms WaitForService to wait for kubelet
	I1018 13:26:33.293346 1033107 kubeadm.go:586] duration metric: took 42.229793788s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:26:33.293386 1033107 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:26:33.300456 1033107 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:26:33.300509 1033107 node_conditions.go:123] node cpu capacity is 2
	I1018 13:26:33.300537 1033107 node_conditions.go:105] duration metric: took 7.137332ms to run NodePressure ...
	I1018 13:26:33.300561 1033107 start.go:241] waiting for startup goroutines ...
	I1018 13:26:33.300588 1033107 start.go:246] waiting for cluster config update ...
	I1018 13:26:33.300613 1033107 start.go:255] writing updated cluster config ...
	I1018 13:26:33.301034 1033107 ssh_runner.go:195] Run: rm -f paused
	I1018 13:26:33.307122 1033107 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:26:33.314952 1033107 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2g4gz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:33.323481 1033107 pod_ready.go:94] pod "coredns-66bc5c9577-2g4gz" is "Ready"
	I1018 13:26:33.323517 1033107 pod_ready.go:86] duration metric: took 8.535158ms for pod "coredns-66bc5c9577-2g4gz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:33.327587 1033107 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:33.334269 1033107 pod_ready.go:94] pod "etcd-default-k8s-diff-port-208258" is "Ready"
	I1018 13:26:33.334316 1033107 pod_ready.go:86] duration metric: took 6.68369ms for pod "etcd-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:33.337760 1033107 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:33.344840 1033107 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-208258" is "Ready"
	I1018 13:26:33.344866 1033107 pod_ready.go:86] duration metric: took 7.071075ms for pod "kube-apiserver-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:33.347914 1033107 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:33.714516 1033107 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-208258" is "Ready"
	I1018 13:26:33.714548 1033107 pod_ready.go:86] duration metric: took 366.597394ms for pod "kube-controller-manager-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:33.913990 1033107 pod_ready.go:83] waiting for pod "kube-proxy-q5bvt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:34.314011 1033107 pod_ready.go:94] pod "kube-proxy-q5bvt" is "Ready"
	I1018 13:26:34.314041 1033107 pod_ready.go:86] duration metric: took 400.017646ms for pod "kube-proxy-q5bvt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:34.514625 1033107 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:34.913420 1033107 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-208258" is "Ready"
	I1018 13:26:34.913458 1033107 pod_ready.go:86] duration metric: took 398.803929ms for pod "kube-scheduler-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:26:34.913470 1033107 pod_ready.go:40] duration metric: took 1.606300476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:26:35.023875 1033107 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:26:35.028158 1033107 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-208258" cluster and "default" namespace by default
	W1018 13:26:35.364529 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	W1018 13:26:37.364996 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	W1018 13:26:39.865298 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	W1018 13:26:41.865642 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	W1018 13:26:44.364904 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 13:26:32 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:32.491521348Z" level=info msg="Created container 34876f587cd9591264b023cee1697cb73d00d08dcbf7d342ed13e5951d42ff31: kube-system/coredns-66bc5c9577-2g4gz/coredns" id=80b541fa-62a2-4b01-a8b2-53764b4d1944 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:26:32 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:32.49752818Z" level=info msg="Starting container: 34876f587cd9591264b023cee1697cb73d00d08dcbf7d342ed13e5951d42ff31" id=f153d9a2-6182-4f05-b42d-0eacd1afd8a9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:26:32 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:32.508049002Z" level=info msg="Started container" PID=1759 containerID=34876f587cd9591264b023cee1697cb73d00d08dcbf7d342ed13e5951d42ff31 description=kube-system/coredns-66bc5c9577-2g4gz/coredns id=f153d9a2-6182-4f05-b42d-0eacd1afd8a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2da28d177fd56bf1cb38574da81f40677071553645cafe439de285d813972b34
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.618894104Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b6fde6da-f1d3-48d6-a3cd-b50d810e0e4b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.618975942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.624889211Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:07b2c4125505fbe612bbda4c5658d37ce07abf110e8bb2e4ab3824f249bca1eb UID:efa1431b-7aa6-4fac-8f3a-3ef14ac8ad40 NetNS:/var/run/netns/1d848525-fd03-4deb-9c03-12d963ede71a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012f748}] Aliases:map[]}"
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.625092372Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.644261896Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:07b2c4125505fbe612bbda4c5658d37ce07abf110e8bb2e4ab3824f249bca1eb UID:efa1431b-7aa6-4fac-8f3a-3ef14ac8ad40 NetNS:/var/run/netns/1d848525-fd03-4deb-9c03-12d963ede71a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012f748}] Aliases:map[]}"
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.645692477Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.65927905Z" level=info msg="Ran pod sandbox 07b2c4125505fbe612bbda4c5658d37ce07abf110e8bb2e4ab3824f249bca1eb with infra container: default/busybox/POD" id=b6fde6da-f1d3-48d6-a3cd-b50d810e0e4b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.665281173Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b34fa27f-74ab-44ba-af11-889e5342cd4b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.665433741Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b34fa27f-74ab-44ba-af11-889e5342cd4b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.665472519Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b34fa27f-74ab-44ba-af11-889e5342cd4b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.670671259Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c48c8f17-bd97-451d-b0a1-778e2097f796 name=/runtime.v1.ImageService/PullImage
	Oct 18 13:26:35 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:35.675748571Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.338119681Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c48c8f17-bd97-451d-b0a1-778e2097f796 name=/runtime.v1.ImageService/PullImage
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.339273319Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b2d49e7d-a6ff-4432-8d6a-d1fdaa73c9fa name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.342582216Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3360de38-cded-43a5-8a26-0bbd8f3a21ac name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.349239074Z" level=info msg="Creating container: default/busybox/busybox" id=852610d2-6102-4b7e-b60c-c0daa702df79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.350058752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.358938463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.359560944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.378015899Z" level=info msg="Created container 56085d61946ccfbe9d9ead912b898d96525066e8cc7dbbbe707acf0e0d04dbd6: default/busybox/busybox" id=852610d2-6102-4b7e-b60c-c0daa702df79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.378864271Z" level=info msg="Starting container: 56085d61946ccfbe9d9ead912b898d96525066e8cc7dbbbe707acf0e0d04dbd6" id=cf3e0076-3292-4fb2-8c43-1682c948ccc6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:26:38 default-k8s-diff-port-208258 crio[839]: time="2025-10-18T13:26:38.384610948Z" level=info msg="Started container" PID=1819 containerID=56085d61946ccfbe9d9ead912b898d96525066e8cc7dbbbe707acf0e0d04dbd6 description=default/busybox/busybox id=cf3e0076-3292-4fb2-8c43-1682c948ccc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=07b2c4125505fbe612bbda4c5658d37ce07abf110e8bb2e4ab3824f249bca1eb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	56085d61946cc       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   07b2c4125505f       busybox                                                default
	34876f587cd95       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   2da28d177fd56       coredns-66bc5c9577-2g4gz                               kube-system
	198410ee04c77       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   6c47f9a7ce063       storage-provisioner                                    kube-system
	acb71b2c59462       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   5af5f4cae69ec       kube-proxy-q5bvt                                       kube-system
	c0f535102a1be       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   52191b7d46904       kindnet-4l67c                                          kube-system
	28ade1f855183       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   08648579dd829       kube-scheduler-default-k8s-diff-port-208258            kube-system
	4e0b3062f6273       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   c5ebbb538ab04       kube-apiserver-default-k8s-diff-port-208258            kube-system
	4b20c278a4a5e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   6c3654a4b3ff2       etcd-default-k8s-diff-port-208258                      kube-system
	e514aa16f7681       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   9267cd8f6aa3a       kube-controller-manager-default-k8s-diff-port-208258   kube-system
	
	
	==> coredns [34876f587cd9591264b023cee1697cb73d00d08dcbf7d342ed13e5951d42ff31] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55808 - 57870 "HINFO IN 5263500784541979730.8901045954447785371. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0159109s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-208258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-208258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=default-k8s-diff-port-208258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_25_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:25:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-208258
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:26:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:26:45 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:26:45 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:26:45 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:26:45 +0000   Sat, 18 Oct 2025 13:26:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-208258
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                248dcf9c-de96-4df7-a92b-ba98e54e1b6e
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-2g4gz                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-default-k8s-diff-port-208258                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-4l67c                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-208258             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-208258    200m (10%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-q5bvt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-208258             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientPID
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-208258 event: Registered Node default-k8s-diff-port-208258 in Controller
	  Normal   NodeReady                16s                kubelet          Node default-k8s-diff-port-208258 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 13:01] overlayfs: idmapped layers are currently not supported
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	[Oct18 13:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4b20c278a4a5eed3c025c1a1ef0c4f5e368bcf62e907f45e37c4ab1be4a59cdb] <==
	{"level":"warn","ts":"2025-10-18T13:25:40.010789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.035365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.049921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.069448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.087222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.105077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.124433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.157851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.171844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.191787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.210408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.229092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.245325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.264055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.278926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.339125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.342399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.384575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.418302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.435590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.513164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.548486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.562237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.582157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:25:40.652906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:26:47 up  5:09,  0 user,  load average: 2.55, 2.83, 2.46
	Linux default-k8s-diff-port-208258 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c0f535102a1be3da761caf97c9a831d442c7c0fc2f634481f20d0f59eacb41f8] <==
	I1018 13:25:51.316407       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:25:51.318431       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:25:51.318559       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:25:51.318570       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:25:51.318581       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:25:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:25:51.518671       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:25:51.518690       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:25:51.518704       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:25:51.519454       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:26:21.518848       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:26:21.519173       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:26:21.519283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:26:21.519483       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:26:22.718793       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:26:22.718830       1 metrics.go:72] Registering metrics
	I1018 13:26:22.718891       1 controller.go:711] "Syncing nftables rules"
	I1018 13:26:31.527956       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:26:31.528003       1 main.go:301] handling current node
	I1018 13:26:41.519245       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:26:41.519279       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4e0b3062f6273a0f6aebd0576982e61aaa2326f72cf61a45f8d8488c2ac76863] <==
	I1018 13:25:41.811832       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:25:41.817786       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:25:41.817851       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 13:25:41.828962       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:25:41.829094       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:25:41.845153       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:25:42.503250       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 13:25:42.510795       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 13:25:42.510877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:25:43.495412       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:25:43.575791       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:25:43.710854       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 13:25:43.723052       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 13:25:43.724389       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:25:43.729954       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:25:44.636819       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:25:44.663580       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:25:44.685633       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 13:25:44.720330       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 13:25:50.286797       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:25:50.294687       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:25:50.334731       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:25:50.683588       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 13:25:50.683588       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 13:26:45.500754       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:34862: use of closed network connection
	
	
	==> kube-controller-manager [e514aa16f768182fdfd59ed3fbd1ee69e88981a1236860dc388b9026e5b07a72] <==
	I1018 13:25:49.661337       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 13:25:49.661364       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 13:25:49.661373       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 13:25:49.661380       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:25:49.670347       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 13:25:49.671102       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-208258" podCIDRs=["10.244.0.0/24"]
	I1018 13:25:49.673353       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 13:25:49.673488       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 13:25:49.673582       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-208258"
	I1018 13:25:49.673359       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:25:49.673645       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 13:25:49.675707       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:25:49.676922       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:25:49.677170       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 13:25:49.678118       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:25:49.678514       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 13:25:49.678981       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 13:25:49.680850       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 13:25:49.686627       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 13:25:49.686769       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 13:25:49.687702       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 13:25:49.689603       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:25:49.695709       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 13:25:49.696948       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 13:26:34.680285       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [acb71b2c594625f42414643a0f6d967df4c226d8e9b031ceec3d22e497e8c50f] <==
	I1018 13:25:51.360604       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:25:51.457369       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:25:51.558070       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:25:51.558108       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 13:25:51.558180       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:25:51.670829       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:25:51.671712       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:25:51.690326       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:25:51.690626       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:25:51.690641       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:25:51.691748       1 config.go:200] "Starting service config controller"
	I1018 13:25:51.691760       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:25:51.709956       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:25:51.709984       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:25:51.710014       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:25:51.710019       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:25:51.710441       1 config.go:309] "Starting node config controller"
	I1018 13:25:51.710449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:25:51.791905       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:25:51.811096       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:25:51.811153       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 13:25:51.818409       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [28ade1f8551830f3e63997a074145d23378e851ee915d6320a16bd8f448ac7d4] <==
	I1018 13:25:40.078031       1 serving.go:386] Generated self-signed cert in-memory
	W1018 13:25:43.218028       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 13:25:43.218066       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 13:25:43.218110       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 13:25:43.218119       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 13:25:43.260105       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:25:43.260136       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:25:43.262514       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:25:43.262610       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:25:43.263800       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:25:43.263905       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 13:25:43.269345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 13:25:44.563337       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:25:49 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:49.694296    1322 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.823517    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b-xtables-lock\") pod \"kindnet-4l67c\" (UID: \"2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b\") " pod="kube-system/kindnet-4l67c"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.823574    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcn2n\" (UniqueName: \"kubernetes.io/projected/6398b812-78fd-404d-97b0-222ee6a40671-kube-api-access-hcn2n\") pod \"kube-proxy-q5bvt\" (UID: \"6398b812-78fd-404d-97b0-222ee6a40671\") " pod="kube-system/kube-proxy-q5bvt"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.823603    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwxlc\" (UniqueName: \"kubernetes.io/projected/2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b-kube-api-access-rwxlc\") pod \"kindnet-4l67c\" (UID: \"2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b\") " pod="kube-system/kindnet-4l67c"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.823630    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6398b812-78fd-404d-97b0-222ee6a40671-kube-proxy\") pod \"kube-proxy-q5bvt\" (UID: \"6398b812-78fd-404d-97b0-222ee6a40671\") " pod="kube-system/kube-proxy-q5bvt"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.823704    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b-cni-cfg\") pod \"kindnet-4l67c\" (UID: \"2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b\") " pod="kube-system/kindnet-4l67c"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.823726    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b-lib-modules\") pod \"kindnet-4l67c\" (UID: \"2b2bc26e-59de-40f3-8b1c-d9bb43eaa20b\") " pod="kube-system/kindnet-4l67c"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.823745    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6398b812-78fd-404d-97b0-222ee6a40671-lib-modules\") pod \"kube-proxy-q5bvt\" (UID: \"6398b812-78fd-404d-97b0-222ee6a40671\") " pod="kube-system/kube-proxy-q5bvt"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.823763    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6398b812-78fd-404d-97b0-222ee6a40671-xtables-lock\") pod \"kube-proxy-q5bvt\" (UID: \"6398b812-78fd-404d-97b0-222ee6a40671\") " pod="kube-system/kube-proxy-q5bvt"
	Oct 18 13:25:50 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:50.946243    1322 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 13:25:51 default-k8s-diff-port-208258 kubelet[1322]: W1018 13:25:51.051238    1322 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/crio-5af5f4cae69ec8a5438a7fac65190537cd049bbc9edb6dbda33a5352c29a027f WatchSource:0}: Error finding container 5af5f4cae69ec8a5438a7fac65190537cd049bbc9edb6dbda33a5352c29a027f: Status 404 returned error can't find the container with id 5af5f4cae69ec8a5438a7fac65190537cd049bbc9edb6dbda33a5352c29a027f
	Oct 18 13:25:51 default-k8s-diff-port-208258 kubelet[1322]: W1018 13:25:51.053039    1322 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/crio-52191b7d46904bd01143e361125b440fafa986f40eaffeb44a264e2e21357717 WatchSource:0}: Error finding container 52191b7d46904bd01143e361125b440fafa986f40eaffeb44a264e2e21357717: Status 404 returned error can't find the container with id 52191b7d46904bd01143e361125b440fafa986f40eaffeb44a264e2e21357717
	Oct 18 13:25:52 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:52.079034    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q5bvt" podStartSLOduration=2.079012998 podStartE2EDuration="2.079012998s" podCreationTimestamp="2025-10-18 13:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:25:52.048206334 +0000 UTC m=+7.440259316" watchObservedRunningTime="2025-10-18 13:25:52.079012998 +0000 UTC m=+7.471065988"
	Oct 18 13:25:52 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:25:52.456953    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4l67c" podStartSLOduration=2.4569355489999998 podStartE2EDuration="2.456935549s" podCreationTimestamp="2025-10-18 13:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:25:52.079964706 +0000 UTC m=+7.472017704" watchObservedRunningTime="2025-10-18 13:25:52.456935549 +0000 UTC m=+7.848988539"
	Oct 18 13:26:31 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:26:31.733355    1322 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 13:26:31 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:26:31.960803    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c808af22-0cac-4ac1-bfe8-8d3338c7d048-tmp\") pod \"storage-provisioner\" (UID: \"c808af22-0cac-4ac1-bfe8-8d3338c7d048\") " pod="kube-system/storage-provisioner"
	Oct 18 13:26:31 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:26:31.960927    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88n9\" (UniqueName: \"kubernetes.io/projected/c808af22-0cac-4ac1-bfe8-8d3338c7d048-kube-api-access-l88n9\") pod \"storage-provisioner\" (UID: \"c808af22-0cac-4ac1-bfe8-8d3338c7d048\") " pod="kube-system/storage-provisioner"
	Oct 18 13:26:31 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:26:31.960953    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66edc35b-17da-44f9-93e0-c3178017ebd6-config-volume\") pod \"coredns-66bc5c9577-2g4gz\" (UID: \"66edc35b-17da-44f9-93e0-c3178017ebd6\") " pod="kube-system/coredns-66bc5c9577-2g4gz"
	Oct 18 13:26:31 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:26:31.961020    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p477t\" (UniqueName: \"kubernetes.io/projected/66edc35b-17da-44f9-93e0-c3178017ebd6-kube-api-access-p477t\") pod \"coredns-66bc5c9577-2g4gz\" (UID: \"66edc35b-17da-44f9-93e0-c3178017ebd6\") " pod="kube-system/coredns-66bc5c9577-2g4gz"
	Oct 18 13:26:32 default-k8s-diff-port-208258 kubelet[1322]: W1018 13:26:32.437306    1322 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/crio-2da28d177fd56bf1cb38574da81f40677071553645cafe439de285d813972b34 WatchSource:0}: Error finding container 2da28d177fd56bf1cb38574da81f40677071553645cafe439de285d813972b34: Status 404 returned error can't find the container with id 2da28d177fd56bf1cb38574da81f40677071553645cafe439de285d813972b34
	Oct 18 13:26:33 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:26:33.197722    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2g4gz" podStartSLOduration=43.197687205 podStartE2EDuration="43.197687205s" podCreationTimestamp="2025-10-18 13:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:26:33.16729674 +0000 UTC m=+48.559349796" watchObservedRunningTime="2025-10-18 13:26:33.197687205 +0000 UTC m=+48.589740187"
	Oct 18 13:26:35 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:26:35.309401    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.309381607 podStartE2EDuration="43.309381607s" podCreationTimestamp="2025-10-18 13:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:26:33.254317416 +0000 UTC m=+48.646370398" watchObservedRunningTime="2025-10-18 13:26:35.309381607 +0000 UTC m=+50.701434589"
	Oct 18 13:26:35 default-k8s-diff-port-208258 kubelet[1322]: I1018 13:26:35.395298    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzqcn\" (UniqueName: \"kubernetes.io/projected/efa1431b-7aa6-4fac-8f3a-3ef14ac8ad40-kube-api-access-bzqcn\") pod \"busybox\" (UID: \"efa1431b-7aa6-4fac-8f3a-3ef14ac8ad40\") " pod="default/busybox"
	Oct 18 13:26:35 default-k8s-diff-port-208258 kubelet[1322]: W1018 13:26:35.655757    1322 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/crio-07b2c4125505fbe612bbda4c5658d37ce07abf110e8bb2e4ab3824f249bca1eb WatchSource:0}: Error finding container 07b2c4125505fbe612bbda4c5658d37ce07abf110e8bb2e4ab3824f249bca1eb: Status 404 returned error can't find the container with id 07b2c4125505fbe612bbda4c5658d37ce07abf110e8bb2e4ab3824f249bca1eb
	Oct 18 13:26:45 default-k8s-diff-port-208258 kubelet[1322]: E1018 13:26:45.504047    1322 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44996->127.0.0.1:39171: write tcp 127.0.0.1:44996->127.0.0.1:39171: write: connection reset by peer
	
	
	==> storage-provisioner [198410ee04c770f44199f43bb6cfce09abe864197401eac892734d933b2b86db] <==
	I1018 13:26:32.523538       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:26:32.553731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:26:32.553907       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 13:26:32.557207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:32.569630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:26:32.569876       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:26:32.598724       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-208258_dd3399f7-068b-4215-b87e-ecbd29a436e1!
	I1018 13:26:32.600703       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"884d1ecb-78ac-42d2-b717-b442ddc99282", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-208258_dd3399f7-068b-4215-b87e-ecbd29a436e1 became leader
	W1018 13:26:32.603967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:32.620108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:26:32.700607       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-208258_dd3399f7-068b-4215-b87e-ecbd29a436e1!
	W1018 13:26:34.623222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:34.630302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:36.633555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:36.638568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:38.641320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:38.646438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:40.649278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:40.654082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:42.657983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:42.664923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:44.669407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:44.674205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:46.678141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:46.683820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-208258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-774829 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-774829 --alsologtostderr -v=1: exit status 80 (2.42375332s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-774829 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:27:13.737631 1041068 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:27:13.737881 1041068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:27:13.737907 1041068 out.go:374] Setting ErrFile to fd 2...
	I1018 13:27:13.737928 1041068 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:27:13.738218 1041068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:27:13.738505 1041068 out.go:368] Setting JSON to false
	I1018 13:27:13.738549 1041068 mustload.go:65] Loading cluster: embed-certs-774829
	I1018 13:27:13.739048 1041068 config.go:182] Loaded profile config "embed-certs-774829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:13.739558 1041068 cli_runner.go:164] Run: docker container inspect embed-certs-774829 --format={{.State.Status}}
	I1018 13:27:13.756472 1041068 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:27:13.756794 1041068 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:27:13.858004 1041068 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 13:27:13.843716353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:27:13.858687 1041068 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-774829 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 13:27:13.862076 1041068 out.go:179] * Pausing node embed-certs-774829 ... 
	I1018 13:27:13.865779 1041068 host.go:66] Checking if "embed-certs-774829" exists ...
	I1018 13:27:13.866114 1041068 ssh_runner.go:195] Run: systemctl --version
	I1018 13:27:13.866156 1041068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-774829
	I1018 13:27:13.900359 1041068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34187 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/embed-certs-774829/id_rsa Username:docker}
	I1018 13:27:14.015877 1041068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:27:14.051878 1041068 pause.go:52] kubelet running: true
	I1018 13:27:14.051948 1041068 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:27:14.442846 1041068 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:27:14.442931 1041068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:27:14.557386 1041068 cri.go:89] found id: "79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287"
	I1018 13:27:14.557446 1041068 cri.go:89] found id: "d56bd96894eec0d3969c06a8d8d1d0bf5187a978e2c0b7959b860634b1d1353a"
	I1018 13:27:14.557466 1041068 cri.go:89] found id: "01dfb2bcdca8f72f569ed8490d352da5859334740e43e096120f437c0d4ad559"
	I1018 13:27:14.557487 1041068 cri.go:89] found id: "3e01b6016331229d40e0a7c37b38857960dd32893c3bfe6e0a6654dd88a59a92"
	I1018 13:27:14.557509 1041068 cri.go:89] found id: "deb79053d475ceade5869b7a5c80b59e86ff337adc487a96c4db827d88d518dd"
	I1018 13:27:14.557533 1041068 cri.go:89] found id: "a43c33d591b5ab9bb0ab2cf0448a86a485b202dc1d02bb68cae0cb40cd379794"
	I1018 13:27:14.557552 1041068 cri.go:89] found id: "7920a44c552e4c5e2ad627678ddd2e1ca5f62a7398b052140a83b7d76c068d6e"
	I1018 13:27:14.557571 1041068 cri.go:89] found id: "fa361f5a5688b380f5f99d0c7c6b08eba214e61325f08a0579323568e2dc4974"
	I1018 13:27:14.557591 1041068 cri.go:89] found id: "c9201764369f43ee1bb5e0a3d7d47a5bff8966959e69a3db59c9b1d1b71735b1"
	I1018 13:27:14.557614 1041068 cri.go:89] found id: "8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	I1018 13:27:14.557634 1041068 cri.go:89] found id: "2b8230f4d1bb2af92d33d63d23eda6f397401cdbcc30e1fe9bcc5378a56e47d5"
	I1018 13:27:14.557653 1041068 cri.go:89] found id: ""
	I1018 13:27:14.557730 1041068 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:27:14.579783 1041068 retry.go:31] will retry after 170.317653ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:27:14Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:27:14.751249 1041068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:27:14.773758 1041068 pause.go:52] kubelet running: false
	I1018 13:27:14.773864 1041068 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:27:15.059447 1041068 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:27:15.059602 1041068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:27:15.203792 1041068 cri.go:89] found id: "79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287"
	I1018 13:27:15.203854 1041068 cri.go:89] found id: "d56bd96894eec0d3969c06a8d8d1d0bf5187a978e2c0b7959b860634b1d1353a"
	I1018 13:27:15.203874 1041068 cri.go:89] found id: "01dfb2bcdca8f72f569ed8490d352da5859334740e43e096120f437c0d4ad559"
	I1018 13:27:15.203894 1041068 cri.go:89] found id: "3e01b6016331229d40e0a7c37b38857960dd32893c3bfe6e0a6654dd88a59a92"
	I1018 13:27:15.203914 1041068 cri.go:89] found id: "deb79053d475ceade5869b7a5c80b59e86ff337adc487a96c4db827d88d518dd"
	I1018 13:27:15.203934 1041068 cri.go:89] found id: "a43c33d591b5ab9bb0ab2cf0448a86a485b202dc1d02bb68cae0cb40cd379794"
	I1018 13:27:15.203953 1041068 cri.go:89] found id: "7920a44c552e4c5e2ad627678ddd2e1ca5f62a7398b052140a83b7d76c068d6e"
	I1018 13:27:15.203974 1041068 cri.go:89] found id: "fa361f5a5688b380f5f99d0c7c6b08eba214e61325f08a0579323568e2dc4974"
	I1018 13:27:15.203992 1041068 cri.go:89] found id: "c9201764369f43ee1bb5e0a3d7d47a5bff8966959e69a3db59c9b1d1b71735b1"
	I1018 13:27:15.204018 1041068 cri.go:89] found id: "8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	I1018 13:27:15.204049 1041068 cri.go:89] found id: "2b8230f4d1bb2af92d33d63d23eda6f397401cdbcc30e1fe9bcc5378a56e47d5"
	I1018 13:27:15.204070 1041068 cri.go:89] found id: ""
	I1018 13:27:15.204163 1041068 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:27:15.219569 1041068 retry.go:31] will retry after 495.340525ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:27:15Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:27:15.716052 1041068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:27:15.730528 1041068 pause.go:52] kubelet running: false
	I1018 13:27:15.730594 1041068 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:27:15.969356 1041068 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:27:15.969440 1041068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:27:16.045916 1041068 cri.go:89] found id: "79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287"
	I1018 13:27:16.045940 1041068 cri.go:89] found id: "d56bd96894eec0d3969c06a8d8d1d0bf5187a978e2c0b7959b860634b1d1353a"
	I1018 13:27:16.045945 1041068 cri.go:89] found id: "01dfb2bcdca8f72f569ed8490d352da5859334740e43e096120f437c0d4ad559"
	I1018 13:27:16.045949 1041068 cri.go:89] found id: "3e01b6016331229d40e0a7c37b38857960dd32893c3bfe6e0a6654dd88a59a92"
	I1018 13:27:16.045953 1041068 cri.go:89] found id: "deb79053d475ceade5869b7a5c80b59e86ff337adc487a96c4db827d88d518dd"
	I1018 13:27:16.045956 1041068 cri.go:89] found id: "a43c33d591b5ab9bb0ab2cf0448a86a485b202dc1d02bb68cae0cb40cd379794"
	I1018 13:27:16.045959 1041068 cri.go:89] found id: "7920a44c552e4c5e2ad627678ddd2e1ca5f62a7398b052140a83b7d76c068d6e"
	I1018 13:27:16.045962 1041068 cri.go:89] found id: "fa361f5a5688b380f5f99d0c7c6b08eba214e61325f08a0579323568e2dc4974"
	I1018 13:27:16.045965 1041068 cri.go:89] found id: "c9201764369f43ee1bb5e0a3d7d47a5bff8966959e69a3db59c9b1d1b71735b1"
	I1018 13:27:16.045972 1041068 cri.go:89] found id: "8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	I1018 13:27:16.045976 1041068 cri.go:89] found id: "2b8230f4d1bb2af92d33d63d23eda6f397401cdbcc30e1fe9bcc5378a56e47d5"
	I1018 13:27:16.045979 1041068 cri.go:89] found id: ""
	I1018 13:27:16.046031 1041068 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:27:16.061786 1041068 out.go:203] 
	W1018 13:27:16.064580 1041068 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:27:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:27:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 13:27:16.064619 1041068 out.go:285] * 
	* 
	W1018 13:27:16.071975 1041068 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 13:27:16.074915 1041068 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-774829 --alsologtostderr -v=1 failed: exit status 80
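The exit status 80 above traces back to the container-listing step in the stderr log: each attempt runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory", and pause gives up after a few backoff retries (the retry.go lines show waits of roughly 170ms and 495ms). Below is a minimal, self-contained Go sketch of that retry-around-an-external-command pattern; the function name, attempt count, and backoff values are illustrative only, not minikube's actual pause.go code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningContainers mirrors the retry pattern in the log above: run
// "sudo runc list -f json" and retry with a growing backoff while it fails.
// Attempt count and initial backoff here are illustrative, not minikube's.
func listRunningContainers(attempts int, backoff time.Duration) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		// On this node every attempt fails with "open /run/runc: no such file
		// or directory", which is what surfaces as GUEST_PAUSE in the test.
		lastErr = fmt.Errorf("list running: runc: %w: %s", err, out)
		time.Sleep(backoff)
		backoff *= 2
	}
	return nil, lastErr
}

func main() {
	if out, err := listRunningContainers(3, 200*time.Millisecond); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println(string(out))
	}
}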
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-774829
helpers_test.go:243: (dbg) docker inspect embed-certs-774829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5",
	        "Created": "2025-10-18T13:24:26.79427098Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1036568,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:26:09.778608724Z",
	            "FinishedAt": "2025-10-18T13:26:08.920688126Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/hosts",
	        "LogPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5-json.log",
	        "Name": "/embed-certs-774829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-774829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-774829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5",
	                "LowerDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-774829",
	                "Source": "/var/lib/docker/volumes/embed-certs-774829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-774829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-774829",
	                "name.minikube.sigs.k8s.io": "embed-certs-774829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d535ea2ce2f63f81bac12e38f8c956ad12e74300f2ada4b54ad2ea62a0a41d48",
	            "SandboxKey": "/var/run/docker/netns/d535ea2ce2f6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34191"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34190"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-774829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:7d:ce:0e:91:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e311031c6dc9b74f7ff8e4ce1a369f0cc1a288a1b5c06ece89bfc9abebacd083",
	                    "EndpointID": "e83eb546e467834b57a259101d9b8547098ad5b78c7c78261132188b2cdafa6f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-774829",
	                        "43d79c77c4e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
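For reference, the ssh connection used by the pause attempt comes from the 22/tcp entry in the Ports block above (HostIp 127.0.0.1, HostPort 34187), which is what the cli_runner step resolves with the docker container inspect -f template earlier in the stderr output. A small Go sketch of that lookup, assuming only the docker CLI is available (the helper name and error wrapping are illustrative, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortForSSH runs the same Go-template inspect query seen in the cli_runner
// log line and returns the host port mapped to the container's 22/tcp.
func hostPortForSSH(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPortForSSH("embed-certs-774829")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh host port:", port) // 34187 for the container inspected above
}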
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-774829 -n embed-certs-774829
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-774829 -n embed-certs-774829: exit status 2 (425.314088ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-774829 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-774829 logs -n 25: (1.418847957s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │                     │
	│ stop    │ -p no-preload-779884 --alsologtostderr -v=3                                                                                                                              │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ addons  │ enable dashboard -p no-preload-779884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:24 UTC │
	│ delete  │ -p cert-expiration-076887                                                                                                                                                │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:24 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                               │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                              │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ delete  │ -p no-preload-779884                                                                                                                                                     │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p no-preload-779884                                                                                                                                                     │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                          │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ stop    │ -p embed-certs-774829 --alsologtostderr -v=3                                                                                                                             │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:26 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-208258 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-208258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	│ image   │ embed-certs-774829 image list --format=json                                                                                                                              │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ pause   │ -p embed-certs-774829 --alsologtostderr -v=1                                                                                                                             │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:27:00
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:27:00.645184 1039404 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:27:00.645346 1039404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:27:00.645358 1039404 out.go:374] Setting ErrFile to fd 2...
	I1018 13:27:00.645363 1039404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:27:00.645641 1039404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:27:00.646030 1039404 out.go:368] Setting JSON to false
	I1018 13:27:00.647004 1039404 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18573,"bootTime":1760775448,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:27:00.647072 1039404 start.go:141] virtualization:  
	I1018 13:27:00.650750 1039404 out.go:179] * [default-k8s-diff-port-208258] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:27:00.654640 1039404 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:27:00.654781 1039404 notify.go:220] Checking for updates...
	I1018 13:27:00.660624 1039404 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:27:00.663526 1039404 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:27:00.666499 1039404 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:27:00.669303 1039404 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:27:00.672221 1039404 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:27:00.675789 1039404 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:00.676408 1039404 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:27:00.708620 1039404 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:27:00.708742 1039404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:27:00.765901 1039404 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:27:00.755186267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:27:00.766722 1039404 docker.go:318] overlay module found
	I1018 13:27:00.769872 1039404 out.go:179] * Using the docker driver based on existing profile
	I1018 13:27:00.772709 1039404 start.go:305] selected driver: docker
	I1018 13:27:00.772729 1039404 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:27:00.772835 1039404 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:27:00.773601 1039404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:27:00.850845 1039404 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:27:00.83695915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:27:00.851198 1039404 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:27:00.851226 1039404 cni.go:84] Creating CNI manager for ""
	I1018 13:27:00.851283 1039404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:27:00.851324 1039404 start.go:349] cluster config:
	{Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:27:00.856314 1039404 out.go:179] * Starting "default-k8s-diff-port-208258" primary control-plane node in "default-k8s-diff-port-208258" cluster
	I1018 13:27:00.859090 1039404 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:27:00.861956 1039404 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:27:00.864738 1039404 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:27:00.864793 1039404 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:27:00.864807 1039404 cache.go:58] Caching tarball of preloaded images
	I1018 13:27:00.864819 1039404 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:27:00.864904 1039404 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:27:00.864914 1039404 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:27:00.865017 1039404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/config.json ...
	I1018 13:27:00.883990 1039404 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:27:00.884014 1039404 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:27:00.884032 1039404 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:27:00.884060 1039404 start.go:360] acquireMachinesLock for default-k8s-diff-port-208258: {Name:mk1489085c407b0af704e7c70968afb6ecaa3acb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:27:00.884124 1039404 start.go:364] duration metric: took 39.532µs to acquireMachinesLock for "default-k8s-diff-port-208258"
	I1018 13:27:00.884148 1039404 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:27:00.884157 1039404 fix.go:54] fixHost starting: 
	I1018 13:27:00.884436 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:00.901260 1039404 fix.go:112] recreateIfNeeded on default-k8s-diff-port-208258: state=Stopped err=<nil>
	W1018 13:27:00.901292 1039404 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 13:26:59.868551 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	I1018 13:27:00.382004 1036440 pod_ready.go:94] pod "coredns-66bc5c9577-ch4qs" is "Ready"
	I1018 13:27:00.382030 1036440 pod_ready.go:86] duration metric: took 34.522844504s for pod "coredns-66bc5c9577-ch4qs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.393041 1036440 pod_ready.go:83] waiting for pod "etcd-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.406654 1036440 pod_ready.go:94] pod "etcd-embed-certs-774829" is "Ready"
	I1018 13:27:00.406686 1036440 pod_ready.go:86] duration metric: took 13.618506ms for pod "etcd-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.410125 1036440 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.418753 1036440 pod_ready.go:94] pod "kube-apiserver-embed-certs-774829" is "Ready"
	I1018 13:27:00.418789 1036440 pod_ready.go:86] duration metric: took 8.632873ms for pod "kube-apiserver-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.422466 1036440 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.567477 1036440 pod_ready.go:94] pod "kube-controller-manager-embed-certs-774829" is "Ready"
	I1018 13:27:00.567510 1036440 pod_ready.go:86] duration metric: took 144.897639ms for pod "kube-controller-manager-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.766682 1036440 pod_ready.go:83] waiting for pod "kube-proxy-vqgcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:01.167045 1036440 pod_ready.go:94] pod "kube-proxy-vqgcc" is "Ready"
	I1018 13:27:01.167072 1036440 pod_ready.go:86] duration metric: took 400.372503ms for pod "kube-proxy-vqgcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:01.367964 1036440 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:01.766510 1036440 pod_ready.go:94] pod "kube-scheduler-embed-certs-774829" is "Ready"
	I1018 13:27:01.766541 1036440 pod_ready.go:86] duration metric: took 398.546676ms for pod "kube-scheduler-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:01.766553 1036440 pod_ready.go:40] duration metric: took 35.912807172s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:27:01.827946 1036440 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:27:01.831646 1036440 out.go:179] * Done! kubectl is now configured to use "embed-certs-774829" cluster and "default" namespace by default
	I1018 13:27:00.904565 1039404 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-208258" ...
	I1018 13:27:00.904666 1039404 cli_runner.go:164] Run: docker start default-k8s-diff-port-208258
	I1018 13:27:01.186896 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:01.212788 1039404 kic.go:430] container "default-k8s-diff-port-208258" state is running.
	I1018 13:27:01.213191 1039404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:27:01.244331 1039404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/config.json ...
	I1018 13:27:01.244571 1039404 machine.go:93] provisionDockerMachine start ...
	I1018 13:27:01.244630 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:01.269127 1039404 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:01.269443 1039404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1018 13:27:01.269455 1039404 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:27:01.270528 1039404 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 13:27:04.419554 1039404 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-208258
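	[editor's note] The native SSH client above dials the container's forwarded port (127.0.0.1:34192, user "docker", key path as logged further down) and runs plain commands such as `hostname`; the log shows one handshake EOF while the container was still coming up. A rough Go sketch of that pattern with golang.org/x/crypto/ssh follows — an illustration only, not minikube's libmachine code; the lack of retry and the InsecureIgnoreHostKey choice are assumptions for the sketch.

	// ssh_hostname.go: dial the node's forwarded SSH port and run `hostname`,
	// roughly what the libmachine SSH client in the log above is doing.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port copied from the log; adjust for your environment.
		keyPath := "/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa"
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: acceptable for a local test container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34192", cfg)
		if err != nil {
			panic(err) // the log shows one handshake EOF before the container was ready
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}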
	
	I1018 13:27:04.419580 1039404 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-208258"
	I1018 13:27:04.419683 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:04.441288 1039404 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:04.441642 1039404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1018 13:27:04.441672 1039404 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-208258 && echo "default-k8s-diff-port-208258" | sudo tee /etc/hostname
	I1018 13:27:04.607234 1039404 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-208258
	
	I1018 13:27:04.607339 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:04.626445 1039404 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:04.626786 1039404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1018 13:27:04.626810 1039404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-208258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-208258/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-208258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:27:04.779973 1039404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
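	[editor's note] The shell snippet above keeps the node's 127.0.1.1 /etc/hosts entry in sync with the new hostname: leave it alone if any line already ends with the hostname, rewrite an existing 127.0.1.1 line, otherwise append one. A minimal Go sketch of the same bookkeeping, assuming a local copy of the hosts file (it is not minikube's implementation):

	// ensure_hosts.go: map 127.0.1.1 to a hostname in an /etc/hosts-style file,
	// mirroring the grep/sed/tee logic in the provisioning script above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		// Already present with this hostname? Nothing to do.
		for _, l := range lines {
			fields := strings.Fields(l)
			if len(fields) >= 2 && fields[len(fields)-1] == hostname {
				return nil
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the existing 127.0.1.1 line
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname) // or append a new one
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		// "hosts.example" is a hypothetical copy of /etc/hosts used for the example.
		if err := ensureHostsEntry("hosts.example", "default-k8s-diff-port-208258"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}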
	I1018 13:27:04.780017 1039404 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:27:04.780046 1039404 ubuntu.go:190] setting up certificates
	I1018 13:27:04.780060 1039404 provision.go:84] configureAuth start
	I1018 13:27:04.780123 1039404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:27:04.800059 1039404 provision.go:143] copyHostCerts
	I1018 13:27:04.800141 1039404 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:27:04.800158 1039404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:27:04.800244 1039404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:27:04.800368 1039404 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:27:04.800381 1039404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:27:04.800417 1039404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:27:04.800487 1039404 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:27:04.800495 1039404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:27:04.800522 1039404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:27:04.800586 1039404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-208258 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-208258 localhost minikube]
	I1018 13:27:05.072368 1039404 provision.go:177] copyRemoteCerts
	I1018 13:27:05.072451 1039404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:27:05.072499 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.091120 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:05.201403 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:27:05.222554 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:27:05.243981 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 13:27:05.262714 1039404 provision.go:87] duration metric: took 482.627838ms to configureAuth
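	[editor's note] configureAuth above copies the host CA material and mints a server certificate whose SANs cover the node's addresses and names (127.0.0.1, 192.168.85.2, default-k8s-diff-port-208258, localhost, minikube). A self-contained Go sketch of that kind of issuance with crypto/x509 follows; the throwaway in-memory CA, 2048-bit RSA keys, lifetimes, and ignored errors are assumptions for brevity — minikube itself reuses its existing minikubeCA, as the later log lines show.

	// issue_server_cert.go: create a CA and a server certificate whose SANs match
	// the ones listed in the provisioning log above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key pair and self-signed CA certificate (illustrative only).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server key pair and certificate with the SANs from the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-208258"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"default-k8s-diff-port-208258", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// Write the server certificate as PEM ("server.pem" is a placeholder name).
		f, _ := os.Create("server.pem")
		defer f.Close()
		pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}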
	I1018 13:27:05.262742 1039404 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:27:05.262942 1039404 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:05.263062 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.282233 1039404 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:05.282567 1039404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1018 13:27:05.282591 1039404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:27:05.616764 1039404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:27:05.616839 1039404 machine.go:96] duration metric: took 4.372257578s to provisionDockerMachine
	I1018 13:27:05.616867 1039404 start.go:293] postStartSetup for "default-k8s-diff-port-208258" (driver="docker")
	I1018 13:27:05.616929 1039404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:27:05.617032 1039404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:27:05.617105 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.636823 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:05.745542 1039404 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:27:05.749266 1039404 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:27:05.749295 1039404 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:27:05.749307 1039404 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:27:05.749362 1039404 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:27:05.749449 1039404 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:27:05.749559 1039404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:27:05.758149 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:27:05.777083 1039404 start.go:296] duration metric: took 160.186537ms for postStartSetup
	I1018 13:27:05.777167 1039404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:27:05.777224 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.795383 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:05.901435 1039404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:27:05.906721 1039404 fix.go:56] duration metric: took 5.022556485s for fixHost
	I1018 13:27:05.906745 1039404 start.go:83] releasing machines lock for "default-k8s-diff-port-208258", held for 5.022608875s
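	[editor's note] The df calls a few lines up sample usage and free space of /var inside the node. The same check done locally looks roughly like the Go sketch below (Linux-only; the path and the GiB rounding are assumptions for the example).

	// disk_check.go: report free and total space on /var via statfs, the
	// information the `df -h /var` / `df -BG /var` commands above collect.
	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			panic(err)
		}
		freeGB := st.Bavail * uint64(st.Bsize) / (1 << 30)
		totalGB := st.Blocks * uint64(st.Bsize) / (1 << 30)
		fmt.Printf("/var: %dG free of %dG\n", freeGB, totalGB)
	}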
	I1018 13:27:05.906812 1039404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:27:05.924341 1039404 ssh_runner.go:195] Run: cat /version.json
	I1018 13:27:05.924398 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.924402 1039404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:27:05.924465 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.944525 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:05.946948 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:06.059753 1039404 ssh_runner.go:195] Run: systemctl --version
	I1018 13:27:06.170452 1039404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:27:06.213204 1039404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:27:06.217886 1039404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:27:06.217961 1039404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:27:06.227427 1039404 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:27:06.227453 1039404 start.go:495] detecting cgroup driver to use...
	I1018 13:27:06.227517 1039404 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:27:06.227592 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:27:06.245445 1039404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:27:06.259234 1039404 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:27:06.259296 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:27:06.275788 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:27:06.289957 1039404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:27:06.416174 1039404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:27:06.547019 1039404 docker.go:234] disabling docker service ...
	I1018 13:27:06.547130 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:27:06.562700 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:27:06.577850 1039404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:27:06.693960 1039404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:27:06.810970 1039404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:27:06.825982 1039404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:27:06.842063 1039404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:27:06.842182 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.852702 1039404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:27:06.852831 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.862476 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.871880 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.882381 1039404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:27:06.891789 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.900923 1039404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.909496 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.918956 1039404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:27:06.928064 1039404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:27:06.936703 1039404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:27:07.051230 1039404 ssh_runner.go:195] Run: sudo systemctl restart crio
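	[editor's note] The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before cri-o is restarted. A minimal Go sketch of the same kind of line-oriented rewrite follows; the key names and values are taken from the log, the local file path and error handling are assumptions.

	// patch_crio_conf.go: line-oriented rewrites of a cri-o drop-in config,
	// mirroring two of the sed edits in the log above.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "02-crio.conf" // illustrative local copy of /etc/crio/crio.conf.d/02-crio.conf
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf := string(data)

		// pause_image = "registry.k8s.io/pause:3.10.1"
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// cgroup_manager = "cgroupfs"
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
			panic(err)
		}
	}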
	I1018 13:27:07.190509 1039404 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:27:07.190579 1039404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:27:07.196063 1039404 start.go:563] Will wait 60s for crictl version
	I1018 13:27:07.196129 1039404 ssh_runner.go:195] Run: which crictl
	I1018 13:27:07.200082 1039404 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:27:07.231577 1039404 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:27:07.231705 1039404 ssh_runner.go:195] Run: crio --version
	I1018 13:27:07.269127 1039404 ssh_runner.go:195] Run: crio --version
	I1018 13:27:07.303377 1039404 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:27:07.306224 1039404 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-208258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:27:07.322940 1039404 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:27:07.326727 1039404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:27:07.336881 1039404 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:27:07.337010 1039404 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:27:07.337075 1039404 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:27:07.372723 1039404 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:27:07.372748 1039404 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:27:07.372832 1039404 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:27:07.403442 1039404 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:27:07.403467 1039404 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:27:07.403476 1039404 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1018 13:27:07.403576 1039404 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-208258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 13:27:07.403683 1039404 ssh_runner.go:195] Run: crio config
	I1018 13:27:07.456286 1039404 cni.go:84] Creating CNI manager for ""
	I1018 13:27:07.456312 1039404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:27:07.456329 1039404 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:27:07.456375 1039404 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-208258 NodeName:default-k8s-diff-port-208258 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:27:07.456552 1039404 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-208258"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:27:07.456633 1039404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:27:07.465256 1039404 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:27:07.465326 1039404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:27:07.473051 1039404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 13:27:07.486285 1039404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:27:07.499766 1039404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
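	[editor's note] The kubeadm config written above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12; those two ranges need to stay disjoint for routing to work. A quick, purely illustrative Go check of that invariant with net/netip:

	// cidr_check.go: confirm the pod and service CIDRs from the kubeadm config
	// above are well-formed and do not overlap.
	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		pods := netip.MustParsePrefix("10.244.0.0/16")    // podSubnet
		services := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet

		if pods.Overlaps(services) {
			fmt.Println("pod and service CIDRs overlap")
			return
		}
		fmt.Println("pod and service CIDRs are disjoint")
	}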
	I1018 13:27:07.513488 1039404 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:27:07.517664 1039404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:27:07.527832 1039404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:27:07.655078 1039404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:27:07.673545 1039404 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258 for IP: 192.168.85.2
	I1018 13:27:07.673622 1039404 certs.go:195] generating shared ca certs ...
	I1018 13:27:07.673667 1039404 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:07.673865 1039404 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:27:07.673952 1039404 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:27:07.673992 1039404 certs.go:257] generating profile certs ...
	I1018 13:27:07.674126 1039404 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.key
	I1018 13:27:07.674237 1039404 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key.b8a2e090
	I1018 13:27:07.674314 1039404 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.key
	I1018 13:27:07.674471 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:27:07.674532 1039404 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:27:07.674558 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:27:07.674616 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:27:07.674677 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:27:07.674753 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:27:07.674833 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:27:07.675516 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:27:07.698004 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:27:07.723745 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:27:07.745747 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:27:07.771165 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 13:27:07.805103 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 13:27:07.834164 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:27:07.857869 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:27:07.884556 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:27:07.905443 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:27:07.926596 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:27:07.946655 1039404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:27:07.963175 1039404 ssh_runner.go:195] Run: openssl version
	I1018 13:27:07.970500 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:27:07.981339 1039404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:07.985876 1039404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:07.985988 1039404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:08.032538 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:27:08.042320 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:27:08.051841 1039404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:27:08.056064 1039404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:27:08.056153 1039404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:27:08.098019 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 13:27:08.106602 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:27:08.115786 1039404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:27:08.120534 1039404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:27:08.120608 1039404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:27:08.164434 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:27:08.172726 1039404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:27:08.176914 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:27:08.219436 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:27:08.262995 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:27:08.305277 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:27:08.354045 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:27:08.423020 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 13:27:08.501993 1039404 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:27:08.502127 1039404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:27:08.502214 1039404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:27:08.587544 1039404 cri.go:89] found id: "76e53086c2fd247abeb1f55181f23154153d2ef51cb8c4020a03e52db1f73a18"
	I1018 13:27:08.587609 1039404 cri.go:89] found id: "3099cd435aadec82c36c1ed527061ac593e3bd4a6cb6c7ecbf7ffab32ce556ed"
	I1018 13:27:08.587636 1039404 cri.go:89] found id: "97cff08426f9b4750d674978bbf2bd36512b2c9b3ddb5fca8832e24400916329"
	I1018 13:27:08.587695 1039404 cri.go:89] found id: "037c1dcd09818b19d840d76cf1bce5c7e62d75f7da12f0807c7abbdb70a0a744"
	I1018 13:27:08.587720 1039404 cri.go:89] found id: ""
	I1018 13:27:08.587799 1039404 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:27:08.624976 1039404 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:27:08Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:27:08.625100 1039404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:27:08.644087 1039404 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 13:27:08.644150 1039404 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:27:08.644215 1039404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:27:08.659394 1039404 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:27:08.660309 1039404 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-208258" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:27:08.660874 1039404 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-208258" cluster setting kubeconfig missing "default-k8s-diff-port-208258" context setting]
	I1018 13:27:08.661840 1039404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:08.664381 1039404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:27:08.680526 1039404 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 13:27:08.680601 1039404 kubeadm.go:601] duration metric: took 36.431566ms to restartPrimaryControlPlane
	I1018 13:27:08.680626 1039404 kubeadm.go:402] duration metric: took 178.647216ms to StartCluster
	I1018 13:27:08.680657 1039404 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:08.680737 1039404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:27:08.682260 1039404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:08.682830 1039404 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:08.682904 1039404 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:27:08.682960 1039404 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:27:08.683035 1039404 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-208258"
	I1018 13:27:08.683067 1039404 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-208258"
	W1018 13:27:08.683087 1039404 addons.go:247] addon storage-provisioner should already be in state true
	I1018 13:27:08.683121 1039404 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:27:08.683609 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:08.684171 1039404 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-208258"
	I1018 13:27:08.684196 1039404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-208258"
	I1018 13:27:08.684389 1039404 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-208258"
	I1018 13:27:08.684407 1039404 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-208258"
	W1018 13:27:08.684413 1039404 addons.go:247] addon dashboard should already be in state true
	I1018 13:27:08.684450 1039404 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:27:08.684473 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:08.684971 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:08.695112 1039404 out.go:179] * Verifying Kubernetes components...
	I1018 13:27:08.698426 1039404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:27:08.732899 1039404 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-208258"
	W1018 13:27:08.732922 1039404 addons.go:247] addon default-storageclass should already be in state true
	I1018 13:27:08.732953 1039404 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:27:08.733393 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:08.759688 1039404 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:27:08.762730 1039404 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 13:27:08.762861 1039404 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:27:08.762872 1039404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:27:08.762931 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:08.775318 1039404 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:27:08.775340 1039404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:27:08.775403 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:08.778485 1039404 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 13:27:08.781408 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 13:27:08.781436 1039404 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 13:27:08.781504 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:08.814163 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:08.831847 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:08.843972 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:09.062126 1039404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:27:09.077350 1039404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:27:09.094933 1039404 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-208258" to be "Ready" ...
	I1018 13:27:09.118464 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 13:27:09.118496 1039404 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 13:27:09.186625 1039404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:27:09.198078 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 13:27:09.198116 1039404 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 13:27:09.280755 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 13:27:09.280792 1039404 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 13:27:09.296259 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 13:27:09.296307 1039404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 13:27:09.310934 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 13:27:09.310971 1039404 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 13:27:09.378376 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 13:27:09.378404 1039404 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 13:27:09.407519 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 13:27:09.407560 1039404 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 13:27:09.429475 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 13:27:09.429502 1039404 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 13:27:09.454221 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:27:09.454248 1039404 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 13:27:09.481993 1039404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:27:13.052821 1039404 node_ready.go:49] node "default-k8s-diff-port-208258" is "Ready"
	I1018 13:27:13.052856 1039404 node_ready.go:38] duration metric: took 3.957883023s for node "default-k8s-diff-port-208258" to be "Ready" ...
	I1018 13:27:13.052870 1039404 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:27:13.052933 1039404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:27:15.198345 1039404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.120958664s)
	I1018 13:27:15.198398 1039404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.011749392s)
	I1018 13:27:15.256327 1039404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.774276715s)
	I1018 13:27:15.256609 1039404 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.203660928s)
	I1018 13:27:15.256630 1039404 api_server.go:72] duration metric: took 6.57368792s to wait for apiserver process to appear ...
	I1018 13:27:15.256636 1039404 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:27:15.256654 1039404 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1018 13:27:15.259738 1039404 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-208258 addons enable metrics-server
	
	I1018 13:27:15.262743 1039404 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 13:27:15.266483 1039404 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:27:15.266568 1039404 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:27:15.266705 1039404 addons.go:514] duration metric: took 6.583733428s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	
	
	==> CRI-O <==
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.147821535Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6c4cc454-fba4-4218-aaff-503d58fc3b87 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.152471485Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6e5a218b-81cf-4f54-97a8-c58d9d6b59e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.152736481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.161806947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.162005193Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a7c1822424f6ffe6c4201b57c1d18fdc34d21d6b0e27127567ffd9537cb770fd/merged/etc/passwd: no such file or directory"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.162030186Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a7c1822424f6ffe6c4201b57c1d18fdc34d21d6b0e27127567ffd9537cb770fd/merged/etc/group: no such file or directory"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.162331137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.177148469Z" level=info msg="Created container 79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287: kube-system/storage-provisioner/storage-provisioner" id=6e5a218b-81cf-4f54-97a8-c58d9d6b59e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.178210061Z" level=info msg="Starting container: 79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287" id=362f467c-3121-478f-83ef-b3b584c59745 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.184936286Z" level=info msg="Started container" PID=1652 containerID=79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287 description=kube-system/storage-provisioner/storage-provisioner id=362f467c-3121-478f-83ef-b3b584c59745 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a03cf16aaaaa8d9fd87030367e1d05c590be658f47cc12f7113bf69e3573c42
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.914516675Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.921079328Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.921340148Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.9214676Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.926149099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.926358102Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.92651497Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.937918714Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.938103389Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.938201236Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.956017167Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.956247824Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.9564436Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.968046871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.968245913Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	79177034fd251       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   5a03cf16aaaaa       storage-provisioner                          kube-system
	8efcbcc6f4ab9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   2088bc643918f       dashboard-metrics-scraper-6ffb444bf9-cmlx5   kubernetes-dashboard
	2b8230f4d1bb2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago      Running             kubernetes-dashboard        0                   00cfde4abbb8b       kubernetes-dashboard-855c9754f9-vk5gp        kubernetes-dashboard
	d56bd96894eec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   ba2fbf86a2527       coredns-66bc5c9577-ch4qs                     kube-system
	f68cee42722d9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   90a3500b89b23       busybox                                      default
	01dfb2bcdca8f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   c70b35f966053       kube-proxy-vqgcc                             kube-system
	3e01b60163312       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   5a03cf16aaaaa       storage-provisioner                          kube-system
	deb79053d475c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   b7ef3e95f0e66       kindnet-zvmhf                                kube-system
	a43c33d591b5a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   cb9bb9f3448ff       kube-scheduler-embed-certs-774829            kube-system
	7920a44c552e4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   62f60a9ab065a       kube-apiserver-embed-certs-774829            kube-system
	fa361f5a5688b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   7765ddae1aaf1       kube-controller-manager-embed-certs-774829   kube-system
	c9201764369f4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   1744e1b8db205       etcd-embed-certs-774829                      kube-system
	
	
	==> coredns [d56bd96894eec0d3969c06a8d8d1d0bf5187a978e2c0b7959b860634b1d1353a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40479 - 7349 "HINFO IN 9166643506441686086.2652544372881032022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020819694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-774829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-774829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=embed-certs-774829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_24_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:24:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-774829
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:27:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:26:54 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:26:54 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:26:54 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:26:54 +0000   Sat, 18 Oct 2025 13:25:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-774829
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bbac08b8-1da7-4bdc-9a1e-0df1153ffa18
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-ch4qs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-embed-certs-774829                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-zvmhf                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-774829             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-774829    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-vqgcc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-774829             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cmlx5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vk5gp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node embed-certs-774829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node embed-certs-774829 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-774829 event: Registered Node embed-certs-774829 in Controller
	  Normal   NodeReady                97s                    kubelet          Node embed-certs-774829 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 60s)      kubelet          Node embed-certs-774829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 60s)      kubelet          Node embed-certs-774829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 60s)      kubelet          Node embed-certs-774829 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node embed-certs-774829 event: Registered Node embed-certs-774829 in Controller
	
	
	==> dmesg <==
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	[Oct18 13:26] overlayfs: idmapped layers are currently not supported
	[Oct18 13:27] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c9201764369f43ee1bb5e0a3d7d47a5bff8966959e69a3db59c9b1d1b71735b1] <==
	{"level":"warn","ts":"2025-10-18T13:26:21.239130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.265811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.284704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.298222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.315845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.363451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.370197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.389171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.450586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.456916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.479712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.526317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.570945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.607511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.631883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.671970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.690015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.733631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.763102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.779953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.817150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.907698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.918542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.941181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:22.109356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60808","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:27:17 up  5:09,  0 user,  load average: 2.80, 2.84, 2.47
	Linux embed-certs-774829 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [deb79053d475ceade5869b7a5c80b59e86ff337adc487a96c4db827d88d518dd] <==
	I1018 13:26:24.636749       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:26:24.637229       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:26:24.637412       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:26:24.707928       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:26:24.708058       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:26:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:26:24.912926       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:26:24.912945       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:26:24.912953       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:26:24.913668       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:26:54.913270       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:26:54.913391       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:26:54.913305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:26:54.914715       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:26:56.513398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:26:56.513426       1 metrics.go:72] Registering metrics
	I1018 13:26:56.513489       1 controller.go:711] "Syncing nftables rules"
	I1018 13:27:04.912836       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:27:04.912968       1 main.go:301] handling current node
	I1018 13:27:14.917298       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:27:14.917332       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7920a44c552e4c5e2ad627678ddd2e1ca5f62a7398b052140a83b7d76c068d6e] <==
	I1018 13:26:23.506522       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:26:23.509669       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:26:23.509764       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 13:26:23.509784       1 policy_source.go:240] refreshing policies
	I1018 13:26:23.511491       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 13:26:23.519447       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:26:23.520581       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 13:26:23.520633       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 13:26:23.520670       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:26:23.525633       1 aggregator.go:171] initial CRD sync complete...
	I1018 13:26:23.525655       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:26:23.525662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:26:23.525669       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:26:23.551749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:26:24.028994       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:26:24.077866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:26:24.650310       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:26:24.881589       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:26:24.944894       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:26:24.969745       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:26:25.275152       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.180.39"}
	I1018 13:26:25.298737       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.112.4"}
	I1018 13:26:27.924006       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:26:28.024563       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:26:28.076305       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fa361f5a5688b380f5f99d0c7c6b08eba214e61325f08a0579323568e2dc4974] <==
	I1018 13:26:27.530961       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 13:26:27.531015       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 13:26:27.531044       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 13:26:27.531066       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 13:26:27.531072       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:26:27.534651       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:26:27.536019       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:26:27.539295       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 13:26:27.539303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:26:27.546499       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 13:26:27.547765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:26:27.547787       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:26:27.547795       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:26:27.549674       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 13:26:27.552483       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:26:27.556145       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 13:26:27.556323       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:26:27.557634       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:26:27.568423       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 13:26:27.568425       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:26:27.568452       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 13:26:27.568472       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 13:26:27.568482       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 13:26:27.572701       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:26:27.579908       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [01dfb2bcdca8f72f569ed8490d352da5859334740e43e096120f437c0d4ad559] <==
	I1018 13:26:24.764435       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:26:25.418946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:26:25.522516       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:26:25.522622       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:26:25.522722       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:26:25.585043       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:26:25.585176       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:26:25.589819       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:26:25.590432       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:26:25.590960       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:26:25.592506       1 config.go:200] "Starting service config controller"
	I1018 13:26:25.592597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:26:25.592653       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:26:25.592693       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:26:25.592725       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:26:25.592766       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:26:25.597337       1 config.go:309] "Starting node config controller"
	I1018 13:26:25.597436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:26:25.597469       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:26:25.694699       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:26:25.697216       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:26:25.697269       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a43c33d591b5ab9bb0ab2cf0448a86a485b202dc1d02bb68cae0cb40cd379794] <==
	I1018 13:26:24.068583       1 serving.go:386] Generated self-signed cert in-memory
	I1018 13:26:25.672739       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:26:25.672776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:26:25.678731       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 13:26:25.678841       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 13:26:25.678906       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:26:25.678943       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:26:25.678986       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:26:25.679015       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:26:25.679442       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:26:25.679567       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 13:26:25.779264       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:26:25.779391       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 13:26:25.779515       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:26:28 embed-certs-774829 kubelet[778]: I1018 13:26:28.260432     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6kg\" (UniqueName: \"kubernetes.io/projected/03a1777f-b7cc-407d-9621-3fa0e485871b-kube-api-access-bf6kg\") pod \"kubernetes-dashboard-855c9754f9-vk5gp\" (UID: \"03a1777f-b7cc-407d-9621-3fa0e485871b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vk5gp"
	Oct 18 13:26:28 embed-certs-774829 kubelet[778]: W1018 13:26:28.519088     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/crio-2088bc643918fdf1c6afdf2169cf9d8b541ecc91e940a70a5ccadf712b3b52f4 WatchSource:0}: Error finding container 2088bc643918fdf1c6afdf2169cf9d8b541ecc91e940a70a5ccadf712b3b52f4: Status 404 returned error can't find the container with id 2088bc643918fdf1c6afdf2169cf9d8b541ecc91e940a70a5ccadf712b3b52f4
	Oct 18 13:26:28 embed-certs-774829 kubelet[778]: W1018 13:26:28.529328     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/crio-00cfde4abbb8bedd7f01497250b4e0ea4f8ad892820b0a81ea54b0d0d7396368 WatchSource:0}: Error finding container 00cfde4abbb8bedd7f01497250b4e0ea4f8ad892820b0a81ea54b0d0d7396368: Status 404 returned error can't find the container with id 00cfde4abbb8bedd7f01497250b4e0ea4f8ad892820b0a81ea54b0d0d7396368
	Oct 18 13:26:30 embed-certs-774829 kubelet[778]: I1018 13:26:30.159585     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 13:26:33 embed-certs-774829 kubelet[778]: I1018 13:26:33.045366     778 scope.go:117] "RemoveContainer" containerID="5131c6c9695e711b196ca339b5992b2c1e09086117d9b0f783e724eb9734a848"
	Oct 18 13:26:34 embed-certs-774829 kubelet[778]: I1018 13:26:34.052095     778 scope.go:117] "RemoveContainer" containerID="5131c6c9695e711b196ca339b5992b2c1e09086117d9b0f783e724eb9734a848"
	Oct 18 13:26:34 embed-certs-774829 kubelet[778]: I1018 13:26:34.057190     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:34 embed-certs-774829 kubelet[778]: E1018 13:26:34.057465     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:26:35 embed-certs-774829 kubelet[778]: I1018 13:26:35.070580     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:35 embed-certs-774829 kubelet[778]: E1018 13:26:35.070737     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:26:38 embed-certs-774829 kubelet[778]: I1018 13:26:38.484716     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:38 embed-certs-774829 kubelet[778]: E1018 13:26:38.484912     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:26:39 embed-certs-774829 kubelet[778]: I1018 13:26:39.119338     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vk5gp" podStartSLOduration=1.26627416 podStartE2EDuration="11.119314546s" podCreationTimestamp="2025-10-18 13:26:28 +0000 UTC" firstStartedPulling="2025-10-18 13:26:28.533309626 +0000 UTC m=+10.856640776" lastFinishedPulling="2025-10-18 13:26:38.386350013 +0000 UTC m=+20.709681162" observedRunningTime="2025-10-18 13:26:39.118445408 +0000 UTC m=+21.441776590" watchObservedRunningTime="2025-10-18 13:26:39.119314546 +0000 UTC m=+21.442645696"
	Oct 18 13:26:49 embed-certs-774829 kubelet[778]: I1018 13:26:49.910614     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:50 embed-certs-774829 kubelet[778]: I1018 13:26:50.130421     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:51 embed-certs-774829 kubelet[778]: I1018 13:26:51.134771     778 scope.go:117] "RemoveContainer" containerID="8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	Oct 18 13:26:51 embed-certs-774829 kubelet[778]: E1018 13:26:51.134934     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:26:55 embed-certs-774829 kubelet[778]: I1018 13:26:55.145969     778 scope.go:117] "RemoveContainer" containerID="3e01b6016331229d40e0a7c37b38857960dd32893c3bfe6e0a6654dd88a59a92"
	Oct 18 13:26:58 embed-certs-774829 kubelet[778]: I1018 13:26:58.484043     778 scope.go:117] "RemoveContainer" containerID="8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	Oct 18 13:26:58 embed-certs-774829 kubelet[778]: E1018 13:26:58.484822     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:27:08 embed-certs-774829 kubelet[778]: I1018 13:27:08.910666     778 scope.go:117] "RemoveContainer" containerID="8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	Oct 18 13:27:08 embed-certs-774829 kubelet[778]: E1018 13:27:08.911304     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:27:14 embed-certs-774829 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:27:14 embed-certs-774829 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:27:14 embed-certs-774829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2b8230f4d1bb2af92d33d63d23eda6f397401cdbcc30e1fe9bcc5378a56e47d5] <==
	2025/10/18 13:26:38 Starting overwatch
	2025/10/18 13:26:38 Using namespace: kubernetes-dashboard
	2025/10/18 13:26:38 Using in-cluster config to connect to apiserver
	2025/10/18 13:26:38 Using secret token for csrf signing
	2025/10/18 13:26:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 13:26:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 13:26:38 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 13:26:38 Generating JWE encryption key
	2025/10/18 13:26:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 13:26:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 13:26:38 Initializing JWE encryption key from synchronized object
	2025/10/18 13:26:38 Creating in-cluster Sidecar client
	2025/10/18 13:26:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:26:38 Serving insecurely on HTTP port: 9090
	2025/10/18 13:27:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3e01b6016331229d40e0a7c37b38857960dd32893c3bfe6e0a6654dd88a59a92] <==
	I1018 13:26:24.586799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 13:26:54.589084       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287] <==
	I1018 13:26:55.195033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:26:55.208507       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:26:55.208627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 13:26:55.211783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:58.666617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:02.926344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:06.525543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:09.579888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:12.602411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:12.607805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:27:12.608029       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:27:12.608220       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-774829_b58f3281-3237-4631-8b2a-3d92bee98ae7!
	I1018 13:27:12.609455       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e819a57-5518-4431-a3ad-90de48f83d9c", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-774829_b58f3281-3237-4631-8b2a-3d92bee98ae7 became leader
	W1018 13:27:12.615348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:12.627496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:27:12.708658       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-774829_b58f3281-3237-4631-8b2a-3d92bee98ae7!
	W1018 13:27:14.631066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:14.638681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:16.642211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:16.646632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
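The kubelet and container logs above show dashboard-metrics-scraper-6ffb444bf9-cmlx5 cycling through CrashLoopBackOff and the first storage-provisioner instance timing out against https://10.96.0.1:443. A minimal follow-up sketch, assuming kubectl access to the embed-certs-774829 context used elsewhere in this post-mortem (the pod name is taken from this run and will differ in others):

	# sketch only; context and pod names assumed from the logs above
	kubectl --context embed-certs-774829 -n kubernetes-dashboard get pods
	kubectl --context embed-certs-774829 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-cmlx5
	kubectl --context embed-certs-774829 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-cmlx5 --previous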
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-774829 -n embed-certs-774829
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-774829 -n embed-certs-774829: exit status 2 (500.555177ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-774829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
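The status check above exits 2 even though the APIServer field prints Running; the systemd lines in the log above show kubelet.service being stopped at 13:27:14, which is consistent with a paused node and would explain the non-zero exit. A minimal sketch for checking the remaining status fields by hand, assuming the same binary and profile the harness uses:

	# sketch only; same status template fields the harness queries elsewhere in this report
	out/minikube-linux-arm64 status -p embed-certs-774829
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-774829 -n embed-certs-774829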
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-774829
helpers_test.go:243: (dbg) docker inspect embed-certs-774829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5",
	        "Created": "2025-10-18T13:24:26.79427098Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1036568,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:26:09.778608724Z",
	            "FinishedAt": "2025-10-18T13:26:08.920688126Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/hosts",
	        "LogPath": "/var/lib/docker/containers/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5-json.log",
	        "Name": "/embed-certs-774829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-774829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-774829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5",
	                "LowerDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0319120ef80c397381816d661e23c840078e11159d00ca4447688dd95292b1df/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-774829",
	                "Source": "/var/lib/docker/volumes/embed-certs-774829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-774829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-774829",
	                "name.minikube.sigs.k8s.io": "embed-certs-774829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d535ea2ce2f63f81bac12e38f8c956ad12e74300f2ada4b54ad2ea62a0a41d48",
	            "SandboxKey": "/var/run/docker/netns/d535ea2ce2f6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34187"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34188"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34191"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34189"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34190"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-774829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:7d:ce:0e:91:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e311031c6dc9b74f7ff8e4ce1a369f0cc1a288a1b5c06ece89bfc9abebacd083",
	                    "EndpointID": "e83eb546e467834b57a259101d9b8547098ad5b78c7c78261132188b2cdafa6f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-774829",
	                        "43d79c77c4e3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
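Rather than reading the full docker inspect JSON above, the individual fields the harness cares about can be pulled with the same Go-template syntax minikube itself uses later in these logs (docker container inspect ... --format={{.State.Status}}). A minimal sketch, assuming the container and network names from this run:

	# sketch only; container/network names taken from the inspect output above
	docker container inspect embed-certs-774829 --format '{{.State.Status}} pid={{.State.Pid}}'
	docker container inspect embed-certs-774829 --format '{{(index .NetworkSettings.Networks "embed-certs-774829").IPAddress}}'
	docker container inspect embed-certs-774829 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'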
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-774829 -n embed-certs-774829
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-774829 -n embed-certs-774829: exit status 2 (416.296616ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
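With the host container reported Running but the node apparently paused, the failing step can also be retried by hand before examining the log dump below; a minimal sketch, assuming the same binary and profile (pause is the command this serial test exercises, and unpause reverses it):

	# sketch only; mirrors the pause invocation recorded in the Audit table below
	out/minikube-linux-arm64 pause -p embed-certs-774829 --alsologtostderr -v=1
	out/minikube-linux-arm64 unpause -p embed-certs-774829 --alsologtostderr -v=1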
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-774829 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-774829 logs -n 25: (1.761902888s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-460322                                                                                                                                                │ old-k8s-version-460322       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:22 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:22 UTC │ 18 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p no-preload-779884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │                     │
	│ stop    │ -p no-preload-779884 --alsologtostderr -v=3                                                                                                                              │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ addons  │ enable dashboard -p no-preload-779884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:24 UTC │
	│ delete  │ -p cert-expiration-076887                                                                                                                                                │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:24 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                               │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                              │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ delete  │ -p no-preload-779884                                                                                                                                                     │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p no-preload-779884                                                                                                                                                     │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                          │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ stop    │ -p embed-certs-774829 --alsologtostderr -v=3                                                                                                                             │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:26 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-208258 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-208258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	│ image   │ embed-certs-774829 image list --format=json                                                                                                                              │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ pause   │ -p embed-certs-774829 --alsologtostderr -v=1                                                                                                                             │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:27:00
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:27:00.645184 1039404 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:27:00.645346 1039404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:27:00.645358 1039404 out.go:374] Setting ErrFile to fd 2...
	I1018 13:27:00.645363 1039404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:27:00.645641 1039404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:27:00.646030 1039404 out.go:368] Setting JSON to false
	I1018 13:27:00.647004 1039404 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18573,"bootTime":1760775448,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:27:00.647072 1039404 start.go:141] virtualization:  
	I1018 13:27:00.650750 1039404 out.go:179] * [default-k8s-diff-port-208258] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:27:00.654640 1039404 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:27:00.654781 1039404 notify.go:220] Checking for updates...
	I1018 13:27:00.660624 1039404 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:27:00.663526 1039404 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:27:00.666499 1039404 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:27:00.669303 1039404 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:27:00.672221 1039404 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:27:00.675789 1039404 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:00.676408 1039404 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:27:00.708620 1039404 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:27:00.708742 1039404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:27:00.765901 1039404 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:27:00.755186267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:27:00.766722 1039404 docker.go:318] overlay module found
	I1018 13:27:00.769872 1039404 out.go:179] * Using the docker driver based on existing profile
	I1018 13:27:00.772709 1039404 start.go:305] selected driver: docker
	I1018 13:27:00.772729 1039404 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:27:00.772835 1039404 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:27:00.773601 1039404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:27:00.850845 1039404 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:27:00.83695915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:27:00.851198 1039404 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:27:00.851226 1039404 cni.go:84] Creating CNI manager for ""
	I1018 13:27:00.851283 1039404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:27:00.851324 1039404 start.go:349] cluster config:
	{Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:27:00.856314 1039404 out.go:179] * Starting "default-k8s-diff-port-208258" primary control-plane node in "default-k8s-diff-port-208258" cluster
	I1018 13:27:00.859090 1039404 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:27:00.861956 1039404 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:27:00.864738 1039404 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:27:00.864793 1039404 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:27:00.864807 1039404 cache.go:58] Caching tarball of preloaded images
	I1018 13:27:00.864819 1039404 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:27:00.864904 1039404 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:27:00.864914 1039404 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:27:00.865017 1039404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/config.json ...
	I1018 13:27:00.883990 1039404 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:27:00.884014 1039404 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:27:00.884032 1039404 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:27:00.884060 1039404 start.go:360] acquireMachinesLock for default-k8s-diff-port-208258: {Name:mk1489085c407b0af704e7c70968afb6ecaa3acb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:27:00.884124 1039404 start.go:364] duration metric: took 39.532µs to acquireMachinesLock for "default-k8s-diff-port-208258"
	I1018 13:27:00.884148 1039404 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:27:00.884157 1039404 fix.go:54] fixHost starting: 
	I1018 13:27:00.884436 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:00.901260 1039404 fix.go:112] recreateIfNeeded on default-k8s-diff-port-208258: state=Stopped err=<nil>
	W1018 13:27:00.901292 1039404 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 13:26:59.868551 1036440 pod_ready.go:104] pod "coredns-66bc5c9577-ch4qs" is not "Ready", error: <nil>
	I1018 13:27:00.382004 1036440 pod_ready.go:94] pod "coredns-66bc5c9577-ch4qs" is "Ready"
	I1018 13:27:00.382030 1036440 pod_ready.go:86] duration metric: took 34.522844504s for pod "coredns-66bc5c9577-ch4qs" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.393041 1036440 pod_ready.go:83] waiting for pod "etcd-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.406654 1036440 pod_ready.go:94] pod "etcd-embed-certs-774829" is "Ready"
	I1018 13:27:00.406686 1036440 pod_ready.go:86] duration metric: took 13.618506ms for pod "etcd-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.410125 1036440 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.418753 1036440 pod_ready.go:94] pod "kube-apiserver-embed-certs-774829" is "Ready"
	I1018 13:27:00.418789 1036440 pod_ready.go:86] duration metric: took 8.632873ms for pod "kube-apiserver-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.422466 1036440 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.567477 1036440 pod_ready.go:94] pod "kube-controller-manager-embed-certs-774829" is "Ready"
	I1018 13:27:00.567510 1036440 pod_ready.go:86] duration metric: took 144.897639ms for pod "kube-controller-manager-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:00.766682 1036440 pod_ready.go:83] waiting for pod "kube-proxy-vqgcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:01.167045 1036440 pod_ready.go:94] pod "kube-proxy-vqgcc" is "Ready"
	I1018 13:27:01.167072 1036440 pod_ready.go:86] duration metric: took 400.372503ms for pod "kube-proxy-vqgcc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:01.367964 1036440 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:01.766510 1036440 pod_ready.go:94] pod "kube-scheduler-embed-certs-774829" is "Ready"
	I1018 13:27:01.766541 1036440 pod_ready.go:86] duration metric: took 398.546676ms for pod "kube-scheduler-embed-certs-774829" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:01.766553 1036440 pod_ready.go:40] duration metric: took 35.912807172s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
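The extra wait above loops over the kube-system pods that carry one of the listed control-plane labels. A roughly equivalent manual check, sketched here with kubectl (the context name comes from the profile in this log, and only two of the labels are shown):

    # Wait up to 2 minutes for CoreDNS and the API server pods to report Ready.
    kubectl --context embed-certs-774829 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=120s
    kubectl --context embed-certs-774829 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=120s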
	I1018 13:27:01.827946 1036440 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:27:01.831646 1036440 out.go:179] * Done! kubectl is now configured to use "embed-certs-774829" cluster and "default" namespace by default
	I1018 13:27:00.904565 1039404 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-208258" ...
	I1018 13:27:00.904666 1039404 cli_runner.go:164] Run: docker start default-k8s-diff-port-208258
	I1018 13:27:01.186896 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:01.212788 1039404 kic.go:430] container "default-k8s-diff-port-208258" state is running.
	I1018 13:27:01.213191 1039404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:27:01.244331 1039404 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/config.json ...
	I1018 13:27:01.244571 1039404 machine.go:93] provisionDockerMachine start ...
	I1018 13:27:01.244630 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:01.269127 1039404 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:01.269443 1039404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1018 13:27:01.269455 1039404 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:27:01.270528 1039404 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 13:27:04.419554 1039404 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-208258
	
	I1018 13:27:04.419580 1039404 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-208258"
	I1018 13:27:04.419683 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:04.441288 1039404 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:04.441642 1039404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1018 13:27:04.441672 1039404 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-208258 && echo "default-k8s-diff-port-208258" | sudo tee /etc/hostname
	I1018 13:27:04.607234 1039404 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-208258
	
	I1018 13:27:04.607339 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:04.626445 1039404 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:04.626786 1039404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1018 13:27:04.626810 1039404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-208258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-208258/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-208258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:27:04.779973 1039404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:27:04.780017 1039404 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:27:04.780046 1039404 ubuntu.go:190] setting up certificates
	I1018 13:27:04.780060 1039404 provision.go:84] configureAuth start
	I1018 13:27:04.780123 1039404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:27:04.800059 1039404 provision.go:143] copyHostCerts
	I1018 13:27:04.800141 1039404 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:27:04.800158 1039404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:27:04.800244 1039404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:27:04.800368 1039404 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:27:04.800381 1039404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:27:04.800417 1039404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:27:04.800487 1039404 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:27:04.800495 1039404 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:27:04.800522 1039404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:27:04.800586 1039404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-208258 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-208258 localhost minikube]
	I1018 13:27:05.072368 1039404 provision.go:177] copyRemoteCerts
	I1018 13:27:05.072451 1039404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:27:05.072499 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.091120 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:05.201403 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:27:05.222554 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:27:05.243981 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 13:27:05.262714 1039404 provision.go:87] duration metric: took 482.627838ms to configureAuth
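configureAuth above regenerates the machine's server certificate with the listed SANs and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. To confirm which SANs actually landed in the served certificate, a sketch (assuming you reach the node through minikube ssh):

    # Print the Subject Alternative Names of the provisioned server cert.
    minikube -p default-k8s-diff-port-208258 ssh -- \
      sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'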
	I1018 13:27:05.262742 1039404 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:27:05.262942 1039404 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:05.263062 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.282233 1039404 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:05.282567 1039404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34192 <nil> <nil>}
	I1018 13:27:05.282591 1039404 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:27:05.616764 1039404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:27:05.616839 1039404 machine.go:96] duration metric: took 4.372257578s to provisionDockerMachine
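The SSH command just above writes /etc/sysconfig/crio.minikube so CRI-O treats the 10.96.0.0/12 service range as an insecure registry, then restarts the runtime. A quick way to confirm the file and the restart took effect (a sketch, again via minikube ssh):

    minikube -p default-k8s-diff-port-208258 ssh -- \
      'cat /etc/sysconfig/crio.minikube && systemctl is-active crio'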
	I1018 13:27:05.616867 1039404 start.go:293] postStartSetup for "default-k8s-diff-port-208258" (driver="docker")
	I1018 13:27:05.616929 1039404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:27:05.617032 1039404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:27:05.617105 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.636823 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:05.745542 1039404 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:27:05.749266 1039404 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:27:05.749295 1039404 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:27:05.749307 1039404 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:27:05.749362 1039404 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:27:05.749449 1039404 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:27:05.749559 1039404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:27:05.758149 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:27:05.777083 1039404 start.go:296] duration metric: took 160.186537ms for postStartSetup
	I1018 13:27:05.777167 1039404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:27:05.777224 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.795383 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:05.901435 1039404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:27:05.906721 1039404 fix.go:56] duration metric: took 5.022556485s for fixHost
	I1018 13:27:05.906745 1039404 start.go:83] releasing machines lock for "default-k8s-diff-port-208258", held for 5.022608875s
	I1018 13:27:05.906812 1039404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-208258
	I1018 13:27:05.924341 1039404 ssh_runner.go:195] Run: cat /version.json
	I1018 13:27:05.924398 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.924402 1039404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:27:05.924465 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:05.944525 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:05.946948 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:06.059753 1039404 ssh_runner.go:195] Run: systemctl --version
	I1018 13:27:06.170452 1039404 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:27:06.213204 1039404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:27:06.217886 1039404 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:27:06.217961 1039404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:27:06.227427 1039404 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 13:27:06.227453 1039404 start.go:495] detecting cgroup driver to use...
	I1018 13:27:06.227517 1039404 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:27:06.227592 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:27:06.245445 1039404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:27:06.259234 1039404 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:27:06.259296 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:27:06.275788 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:27:06.289957 1039404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:27:06.416174 1039404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:27:06.547019 1039404 docker.go:234] disabling docker service ...
	I1018 13:27:06.547130 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:27:06.562700 1039404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:27:06.577850 1039404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:27:06.693960 1039404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:27:06.810970 1039404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:27:06.825982 1039404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:27:06.842063 1039404 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:27:06.842182 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.852702 1039404 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:27:06.852831 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.862476 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.871880 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.882381 1039404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:27:06.891789 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.900923 1039404 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.909496 1039404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:06.918956 1039404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:27:06.928064 1039404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:27:06.936703 1039404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:27:07.051230 1039404 ssh_runner.go:195] Run: sudo systemctl restart crio
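The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to cgroupfs, puts conmon in the pod cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, before the daemon-reload and crio restart. A compact review of the result (a sketch over minikube ssh):

    # Show the settings the sed calls just rewrote.
    minikube -p default-k8s-diff-port-208258 ssh -- \
      sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf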
	I1018 13:27:07.190509 1039404 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:27:07.190579 1039404 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:27:07.196063 1039404 start.go:563] Will wait 60s for crictl version
	I1018 13:27:07.196129 1039404 ssh_runner.go:195] Run: which crictl
	I1018 13:27:07.200082 1039404 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:27:07.231577 1039404 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:27:07.231705 1039404 ssh_runner.go:195] Run: crio --version
	I1018 13:27:07.269127 1039404 ssh_runner.go:195] Run: crio --version
	I1018 13:27:07.303377 1039404 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:27:07.306224 1039404 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-208258 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:27:07.322940 1039404 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 13:27:07.326727 1039404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:27:07.336881 1039404 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:27:07.337010 1039404 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:27:07.337075 1039404 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:27:07.372723 1039404 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:27:07.372748 1039404 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:27:07.372832 1039404 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:27:07.403442 1039404 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:27:07.403467 1039404 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:27:07.403476 1039404 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1018 13:27:07.403576 1039404 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-208258 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
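The kubelet unit override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 378-byte scp a few lines below), replacing ExecStart with the profile-specific flags. To see the unit as systemd actually resolves it, drop-in included:

    # Show kubelet.service plus every drop-in, in merge order.
    minikube -p default-k8s-diff-port-208258 ssh -- systemctl cat kubelet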
	I1018 13:27:07.403683 1039404 ssh_runner.go:195] Run: crio config
	I1018 13:27:07.456286 1039404 cni.go:84] Creating CNI manager for ""
	I1018 13:27:07.456312 1039404 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:27:07.456329 1039404 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 13:27:07.456375 1039404 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-208258 NodeName:default-k8s-diff-port-208258 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:27:07.456552 1039404 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-208258"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:27:07.456633 1039404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:27:07.465256 1039404 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:27:07.465326 1039404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:27:07.473051 1039404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 13:27:07.486285 1039404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:27:07.499766 1039404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
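The kubeadm configuration assembled above is staged as /var/tmp/minikube/kubeadm.yaml.new and later diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. To sanity-check the staged file yourself, a sketch (assumes kubeadm v1.26 or newer, which provides "kubeadm config validate", and uses the binary path where minikube reports finding its k8s binaries):

    # Validate the staged config against the kubeadm API schema on the node.
    minikube -p default-k8s-diff-port-208258 ssh -- \
      sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new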
	I1018 13:27:07.513488 1039404 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:27:07.517664 1039404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:27:07.527832 1039404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:27:07.655078 1039404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:27:07.673545 1039404 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258 for IP: 192.168.85.2
	I1018 13:27:07.673622 1039404 certs.go:195] generating shared ca certs ...
	I1018 13:27:07.673667 1039404 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:07.673865 1039404 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:27:07.673952 1039404 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:27:07.673992 1039404 certs.go:257] generating profile certs ...
	I1018 13:27:07.674126 1039404 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.key
	I1018 13:27:07.674237 1039404 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key.b8a2e090
	I1018 13:27:07.674314 1039404 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.key
	I1018 13:27:07.674471 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:27:07.674532 1039404 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:27:07.674558 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:27:07.674616 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:27:07.674677 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:27:07.674753 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:27:07.674833 1039404 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:27:07.675516 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:27:07.698004 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:27:07.723745 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:27:07.745747 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:27:07.771165 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 13:27:07.805103 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 13:27:07.834164 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:27:07.857869 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:27:07.884556 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:27:07.905443 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:27:07.926596 1039404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:27:07.946655 1039404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:27:07.963175 1039404 ssh_runner.go:195] Run: openssl version
	I1018 13:27:07.970500 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:27:07.981339 1039404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:07.985876 1039404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:07.985988 1039404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:08.032538 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:27:08.042320 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:27:08.051841 1039404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:27:08.056064 1039404 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:27:08.056153 1039404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:27:08.098019 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 13:27:08.106602 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:27:08.115786 1039404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:27:08.120534 1039404 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:27:08.120608 1039404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:27:08.164434 1039404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 13:27:08.172726 1039404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:27:08.176914 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 13:27:08.219436 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 13:27:08.262995 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 13:27:08.305277 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 13:27:08.354045 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 13:27:08.423020 1039404 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
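Each -checkend 86400 call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, non-zero means it will have expired. The same idiom works as a standalone spot check, for example:

    # Exit 0 if the cert is still good in 24h, non-zero otherwise.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h"
    fi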
	I1018 13:27:08.501993 1039404 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-208258 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-208258 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:27:08.502127 1039404 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:27:08.502214 1039404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:27:08.587544 1039404 cri.go:89] found id: "76e53086c2fd247abeb1f55181f23154153d2ef51cb8c4020a03e52db1f73a18"
	I1018 13:27:08.587609 1039404 cri.go:89] found id: "3099cd435aadec82c36c1ed527061ac593e3bd4a6cb6c7ecbf7ffab32ce556ed"
	I1018 13:27:08.587636 1039404 cri.go:89] found id: "97cff08426f9b4750d674978bbf2bd36512b2c9b3ddb5fca8832e24400916329"
	I1018 13:27:08.587695 1039404 cri.go:89] found id: "037c1dcd09818b19d840d76cf1bce5c7e62d75f7da12f0807c7abbdb70a0a744"
	I1018 13:27:08.587720 1039404 cri.go:89] found id: ""
	I1018 13:27:08.587799 1039404 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 13:27:08.624976 1039404 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:27:08Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:27:08.625100 1039404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:27:08.644087 1039404 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 13:27:08.644150 1039404 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 13:27:08.644215 1039404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 13:27:08.659394 1039404 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 13:27:08.660309 1039404 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-208258" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:27:08.660874 1039404 kubeconfig.go:62] /home/jenkins/minikube-integration/21647-834184/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-208258" cluster setting kubeconfig missing "default-k8s-diff-port-208258" context setting]
	I1018 13:27:08.661840 1039404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:08.664381 1039404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 13:27:08.680526 1039404 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 13:27:08.680601 1039404 kubeadm.go:601] duration metric: took 36.431566ms to restartPrimaryControlPlane
	I1018 13:27:08.680626 1039404 kubeadm.go:402] duration metric: took 178.647216ms to StartCluster
	I1018 13:27:08.680657 1039404 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:08.680737 1039404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:27:08.682260 1039404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:08.682830 1039404 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:08.682904 1039404 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:27:08.682960 1039404 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:27:08.683035 1039404 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-208258"
	I1018 13:27:08.683067 1039404 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-208258"
	W1018 13:27:08.683087 1039404 addons.go:247] addon storage-provisioner should already be in state true
	I1018 13:27:08.683121 1039404 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:27:08.683609 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:08.684171 1039404 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-208258"
	I1018 13:27:08.684196 1039404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-208258"
	I1018 13:27:08.684389 1039404 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-208258"
	I1018 13:27:08.684407 1039404 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-208258"
	W1018 13:27:08.684413 1039404 addons.go:247] addon dashboard should already be in state true
	I1018 13:27:08.684450 1039404 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:27:08.684473 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:08.684971 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:08.695112 1039404 out.go:179] * Verifying Kubernetes components...
	I1018 13:27:08.698426 1039404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:27:08.732899 1039404 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-208258"
	W1018 13:27:08.732922 1039404 addons.go:247] addon default-storageclass should already be in state true
	I1018 13:27:08.732953 1039404 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:27:08.733393 1039404 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:27:08.759688 1039404 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:27:08.762730 1039404 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 13:27:08.762861 1039404 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:27:08.762872 1039404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:27:08.762931 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:08.775318 1039404 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:27:08.775340 1039404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:27:08.775403 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:08.778485 1039404 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 13:27:08.781408 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 13:27:08.781436 1039404 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 13:27:08.781504 1039404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:27:08.814163 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:08.831847 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:08.843972 1039404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:27:09.062126 1039404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:27:09.077350 1039404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:27:09.094933 1039404 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-208258" to be "Ready" ...
	I1018 13:27:09.118464 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 13:27:09.118496 1039404 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 13:27:09.186625 1039404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:27:09.198078 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 13:27:09.198116 1039404 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 13:27:09.280755 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 13:27:09.280792 1039404 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 13:27:09.296259 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 13:27:09.296307 1039404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 13:27:09.310934 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 13:27:09.310971 1039404 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 13:27:09.378376 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 13:27:09.378404 1039404 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 13:27:09.407519 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 13:27:09.407560 1039404 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 13:27:09.429475 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 13:27:09.429502 1039404 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 13:27:09.454221 1039404 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:27:09.454248 1039404 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 13:27:09.481993 1039404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:27:13.052821 1039404 node_ready.go:49] node "default-k8s-diff-port-208258" is "Ready"
	I1018 13:27:13.052856 1039404 node_ready.go:38] duration metric: took 3.957883023s for node "default-k8s-diff-port-208258" to be "Ready" ...
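node_ready above polls the node object until its Ready condition turns True, which here took just under four seconds after the container restart. The equivalent one-off query, assuming the default-k8s-diff-port-208258 context is present in your kubeconfig:

    kubectl --context default-k8s-diff-port-208258 get node default-k8s-diff-port-208258 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'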
	I1018 13:27:13.052870 1039404 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:27:13.052933 1039404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:27:15.198345 1039404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.120958664s)
	I1018 13:27:15.198398 1039404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.011749392s)
	I1018 13:27:15.256327 1039404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.774276715s)
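With the three applies finished, the dashboard objects live in the kubernetes-dashboard namespace created by dashboard-ns.yaml. A quick post-apply check might look like this (a sketch; the namespace name is the one the dashboard addon conventionally uses):

    kubectl --context default-k8s-diff-port-208258 -n kubernetes-dashboard get deploy,svc,pods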
	I1018 13:27:15.256609 1039404 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.203660928s)
	I1018 13:27:15.256630 1039404 api_server.go:72] duration metric: took 6.57368792s to wait for apiserver process to appear ...
	I1018 13:27:15.256636 1039404 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:27:15.256654 1039404 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1018 13:27:15.259738 1039404 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-208258 addons enable metrics-server
	
	I1018 13:27:15.262743 1039404 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 13:27:15.266483 1039404 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:27:15.266568 1039404 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:27:15.266705 1039404 addons.go:514] duration metric: took 6.583733428s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	
	
	==> CRI-O <==
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.147821535Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6c4cc454-fba4-4218-aaff-503d58fc3b87 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.152471485Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6e5a218b-81cf-4f54-97a8-c58d9d6b59e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.152736481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.161806947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.162005193Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a7c1822424f6ffe6c4201b57c1d18fdc34d21d6b0e27127567ffd9537cb770fd/merged/etc/passwd: no such file or directory"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.162030186Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a7c1822424f6ffe6c4201b57c1d18fdc34d21d6b0e27127567ffd9537cb770fd/merged/etc/group: no such file or directory"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.162331137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.177148469Z" level=info msg="Created container 79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287: kube-system/storage-provisioner/storage-provisioner" id=6e5a218b-81cf-4f54-97a8-c58d9d6b59e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.178210061Z" level=info msg="Starting container: 79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287" id=362f467c-3121-478f-83ef-b3b584c59745 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:26:55 embed-certs-774829 crio[649]: time="2025-10-18T13:26:55.184936286Z" level=info msg="Started container" PID=1652 containerID=79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287 description=kube-system/storage-provisioner/storage-provisioner id=362f467c-3121-478f-83ef-b3b584c59745 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a03cf16aaaaa8d9fd87030367e1d05c590be658f47cc12f7113bf69e3573c42
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.914516675Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.921079328Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.921340148Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.9214676Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.926149099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.926358102Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.92651497Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.937918714Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.938103389Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.938201236Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.956017167Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.956247824Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.9564436Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.968046871Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:04 embed-certs-774829 crio[649]: time="2025-10-18T13:27:04.968245913Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	79177034fd251       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   5a03cf16aaaaa       storage-provisioner                          kube-system
	8efcbcc6f4ab9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   2088bc643918f       dashboard-metrics-scraper-6ffb444bf9-cmlx5   kubernetes-dashboard
	2b8230f4d1bb2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   00cfde4abbb8b       kubernetes-dashboard-855c9754f9-vk5gp        kubernetes-dashboard
	d56bd96894eec       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   ba2fbf86a2527       coredns-66bc5c9577-ch4qs                     kube-system
	f68cee42722d9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   90a3500b89b23       busybox                                      default
	01dfb2bcdca8f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   c70b35f966053       kube-proxy-vqgcc                             kube-system
	3e01b60163312       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   5a03cf16aaaaa       storage-provisioner                          kube-system
	deb79053d475c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   b7ef3e95f0e66       kindnet-zvmhf                                kube-system
	a43c33d591b5a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   cb9bb9f3448ff       kube-scheduler-embed-certs-774829            kube-system
	7920a44c552e4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   62f60a9ab065a       kube-apiserver-embed-certs-774829            kube-system
	fa361f5a5688b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   7765ddae1aaf1       kube-controller-manager-embed-certs-774829   kube-system
	c9201764369f4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1744e1b8db205       etcd-embed-certs-774829                      kube-system
	
	
	==> coredns [d56bd96894eec0d3969c06a8d8d1d0bf5187a978e2c0b7959b860634b1d1353a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40479 - 7349 "HINFO IN 9166643506441686086.2652544372881032022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020819694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-774829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-774829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=embed-certs-774829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_24_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:24:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-774829
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:27:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:26:54 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:26:54 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:26:54 +0000   Sat, 18 Oct 2025 13:24:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:26:54 +0000   Sat, 18 Oct 2025 13:25:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-774829
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bbac08b8-1da7-4bdc-9a1e-0df1153ffa18
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-ch4qs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-774829                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-zvmhf                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-774829             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-embed-certs-774829    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-vqgcc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-774829             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cmlx5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vk5gp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node embed-certs-774829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node embed-certs-774829 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node embed-certs-774829 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s                  node-controller  Node embed-certs-774829 event: Registered Node embed-certs-774829 in Controller
	  Normal   NodeReady                100s                   kubelet          Node embed-certs-774829 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)      kubelet          Node embed-certs-774829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)      kubelet          Node embed-certs-774829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)      kubelet          Node embed-certs-774829 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-774829 event: Registered Node embed-certs-774829 in Controller
	
	
	==> dmesg <==
	[Oct18 13:03] overlayfs: idmapped layers are currently not supported
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	[Oct18 13:26] overlayfs: idmapped layers are currently not supported
	[Oct18 13:27] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c9201764369f43ee1bb5e0a3d7d47a5bff8966959e69a3db59c9b1d1b71735b1] <==
	{"level":"warn","ts":"2025-10-18T13:26:21.239130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.265811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.284704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.298222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.315845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.363451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.370197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.389171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.450586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.456916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.479712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.526317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.570945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.607511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.631883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.671970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.690015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.733631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.763102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.779953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.817150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.907698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.918542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:21.941181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:26:22.109356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60808","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:27:20 up  5:09,  0 user,  load average: 2.98, 2.88, 2.49
	Linux embed-certs-774829 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [deb79053d475ceade5869b7a5c80b59e86ff337adc487a96c4db827d88d518dd] <==
	I1018 13:26:24.636749       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:26:24.637229       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:26:24.637412       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:26:24.707928       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:26:24.708058       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:26:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:26:24.912926       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:26:24.912945       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:26:24.912953       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:26:24.913668       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:26:54.913270       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:26:54.913391       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:26:54.913305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:26:54.914715       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:26:56.513398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:26:56.513426       1 metrics.go:72] Registering metrics
	I1018 13:26:56.513489       1 controller.go:711] "Syncing nftables rules"
	I1018 13:27:04.912836       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:27:04.912968       1 main.go:301] handling current node
	I1018 13:27:14.917298       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 13:27:14.917332       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7920a44c552e4c5e2ad627678ddd2e1ca5f62a7398b052140a83b7d76c068d6e] <==
	I1018 13:26:23.506522       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:26:23.509669       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:26:23.509764       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 13:26:23.509784       1 policy_source.go:240] refreshing policies
	I1018 13:26:23.511491       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 13:26:23.519447       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:26:23.520581       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 13:26:23.520633       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 13:26:23.520670       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:26:23.525633       1 aggregator.go:171] initial CRD sync complete...
	I1018 13:26:23.525655       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:26:23.525662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:26:23.525669       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:26:23.551749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:26:24.028994       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:26:24.077866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:26:24.650310       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:26:24.881589       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:26:24.944894       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:26:24.969745       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:26:25.275152       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.180.39"}
	I1018 13:26:25.298737       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.112.4"}
	I1018 13:26:27.924006       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:26:28.024563       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:26:28.076305       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fa361f5a5688b380f5f99d0c7c6b08eba214e61325f08a0579323568e2dc4974] <==
	I1018 13:26:27.530961       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 13:26:27.531015       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 13:26:27.531044       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 13:26:27.531066       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 13:26:27.531072       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:26:27.534651       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:26:27.536019       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:26:27.539295       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 13:26:27.539303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:26:27.546499       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 13:26:27.547765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:26:27.547787       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:26:27.547795       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:26:27.549674       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 13:26:27.552483       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:26:27.556145       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 13:26:27.556323       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:26:27.557634       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:26:27.568423       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 13:26:27.568425       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:26:27.568452       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 13:26:27.568472       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 13:26:27.568482       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 13:26:27.572701       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:26:27.579908       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [01dfb2bcdca8f72f569ed8490d352da5859334740e43e096120f437c0d4ad559] <==
	I1018 13:26:24.764435       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:26:25.418946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:26:25.522516       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:26:25.522622       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:26:25.522722       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:26:25.585043       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:26:25.585176       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:26:25.589819       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:26:25.590432       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:26:25.590960       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:26:25.592506       1 config.go:200] "Starting service config controller"
	I1018 13:26:25.592597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:26:25.592653       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:26:25.592693       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:26:25.592725       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:26:25.592766       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:26:25.597337       1 config.go:309] "Starting node config controller"
	I1018 13:26:25.597436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:26:25.597469       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:26:25.694699       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:26:25.697216       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:26:25.697269       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a43c33d591b5ab9bb0ab2cf0448a86a485b202dc1d02bb68cae0cb40cd379794] <==
	I1018 13:26:24.068583       1 serving.go:386] Generated self-signed cert in-memory
	I1018 13:26:25.672739       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:26:25.672776       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:26:25.678731       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 13:26:25.678841       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 13:26:25.678906       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:26:25.678943       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:26:25.678986       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:26:25.679015       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:26:25.679442       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:26:25.679567       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 13:26:25.779264       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:26:25.779391       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 13:26:25.779515       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:26:28 embed-certs-774829 kubelet[778]: I1018 13:26:28.260432     778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6kg\" (UniqueName: \"kubernetes.io/projected/03a1777f-b7cc-407d-9621-3fa0e485871b-kube-api-access-bf6kg\") pod \"kubernetes-dashboard-855c9754f9-vk5gp\" (UID: \"03a1777f-b7cc-407d-9621-3fa0e485871b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vk5gp"
	Oct 18 13:26:28 embed-certs-774829 kubelet[778]: W1018 13:26:28.519088     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/crio-2088bc643918fdf1c6afdf2169cf9d8b541ecc91e940a70a5ccadf712b3b52f4 WatchSource:0}: Error finding container 2088bc643918fdf1c6afdf2169cf9d8b541ecc91e940a70a5ccadf712b3b52f4: Status 404 returned error can't find the container with id 2088bc643918fdf1c6afdf2169cf9d8b541ecc91e940a70a5ccadf712b3b52f4
	Oct 18 13:26:28 embed-certs-774829 kubelet[778]: W1018 13:26:28.529328     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43d79c77c4e3bf42de08e10af4edd6d5cc8f6d259c24f801f41391deaf8af5a5/crio-00cfde4abbb8bedd7f01497250b4e0ea4f8ad892820b0a81ea54b0d0d7396368 WatchSource:0}: Error finding container 00cfde4abbb8bedd7f01497250b4e0ea4f8ad892820b0a81ea54b0d0d7396368: Status 404 returned error can't find the container with id 00cfde4abbb8bedd7f01497250b4e0ea4f8ad892820b0a81ea54b0d0d7396368
	Oct 18 13:26:30 embed-certs-774829 kubelet[778]: I1018 13:26:30.159585     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 13:26:33 embed-certs-774829 kubelet[778]: I1018 13:26:33.045366     778 scope.go:117] "RemoveContainer" containerID="5131c6c9695e711b196ca339b5992b2c1e09086117d9b0f783e724eb9734a848"
	Oct 18 13:26:34 embed-certs-774829 kubelet[778]: I1018 13:26:34.052095     778 scope.go:117] "RemoveContainer" containerID="5131c6c9695e711b196ca339b5992b2c1e09086117d9b0f783e724eb9734a848"
	Oct 18 13:26:34 embed-certs-774829 kubelet[778]: I1018 13:26:34.057190     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:34 embed-certs-774829 kubelet[778]: E1018 13:26:34.057465     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:26:35 embed-certs-774829 kubelet[778]: I1018 13:26:35.070580     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:35 embed-certs-774829 kubelet[778]: E1018 13:26:35.070737     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:26:38 embed-certs-774829 kubelet[778]: I1018 13:26:38.484716     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:38 embed-certs-774829 kubelet[778]: E1018 13:26:38.484912     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:26:39 embed-certs-774829 kubelet[778]: I1018 13:26:39.119338     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vk5gp" podStartSLOduration=1.26627416 podStartE2EDuration="11.119314546s" podCreationTimestamp="2025-10-18 13:26:28 +0000 UTC" firstStartedPulling="2025-10-18 13:26:28.533309626 +0000 UTC m=+10.856640776" lastFinishedPulling="2025-10-18 13:26:38.386350013 +0000 UTC m=+20.709681162" observedRunningTime="2025-10-18 13:26:39.118445408 +0000 UTC m=+21.441776590" watchObservedRunningTime="2025-10-18 13:26:39.119314546 +0000 UTC m=+21.442645696"
	Oct 18 13:26:49 embed-certs-774829 kubelet[778]: I1018 13:26:49.910614     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:50 embed-certs-774829 kubelet[778]: I1018 13:26:50.130421     778 scope.go:117] "RemoveContainer" containerID="91f9fdc684ee586150035a407bf931fc602c4324bf4e626f0e28d63eb3718af9"
	Oct 18 13:26:51 embed-certs-774829 kubelet[778]: I1018 13:26:51.134771     778 scope.go:117] "RemoveContainer" containerID="8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	Oct 18 13:26:51 embed-certs-774829 kubelet[778]: E1018 13:26:51.134934     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:26:55 embed-certs-774829 kubelet[778]: I1018 13:26:55.145969     778 scope.go:117] "RemoveContainer" containerID="3e01b6016331229d40e0a7c37b38857960dd32893c3bfe6e0a6654dd88a59a92"
	Oct 18 13:26:58 embed-certs-774829 kubelet[778]: I1018 13:26:58.484043     778 scope.go:117] "RemoveContainer" containerID="8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	Oct 18 13:26:58 embed-certs-774829 kubelet[778]: E1018 13:26:58.484822     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:27:08 embed-certs-774829 kubelet[778]: I1018 13:27:08.910666     778 scope.go:117] "RemoveContainer" containerID="8efcbcc6f4ab9fb9ddcae961b1b43f3b542121814522a54aa89d934f896b9e79"
	Oct 18 13:27:08 embed-certs-774829 kubelet[778]: E1018 13:27:08.911304     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cmlx5_kubernetes-dashboard(8c86f4f8-1892-45d5-8cdf-4898967a4ce6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cmlx5" podUID="8c86f4f8-1892-45d5-8cdf-4898967a4ce6"
	Oct 18 13:27:14 embed-certs-774829 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:27:14 embed-certs-774829 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:27:14 embed-certs-774829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2b8230f4d1bb2af92d33d63d23eda6f397401cdbcc30e1fe9bcc5378a56e47d5] <==
	2025/10/18 13:26:38 Using namespace: kubernetes-dashboard
	2025/10/18 13:26:38 Using in-cluster config to connect to apiserver
	2025/10/18 13:26:38 Using secret token for csrf signing
	2025/10/18 13:26:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 13:26:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 13:26:38 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 13:26:38 Generating JWE encryption key
	2025/10/18 13:26:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 13:26:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 13:26:38 Initializing JWE encryption key from synchronized object
	2025/10/18 13:26:38 Creating in-cluster Sidecar client
	2025/10/18 13:26:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:26:38 Serving insecurely on HTTP port: 9090
	2025/10/18 13:27:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:26:38 Starting overwatch
	
	
	==> storage-provisioner [3e01b6016331229d40e0a7c37b38857960dd32893c3bfe6e0a6654dd88a59a92] <==
	I1018 13:26:24.586799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 13:26:54.589084       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [79177034fd251f596fef4a9e3a5587dea34ee72dbecb6fa883e108c3808a0287] <==
	I1018 13:26:55.195033       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 13:26:55.208507       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:26:55.208627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 13:26:55.211783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:26:58.666617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:02.926344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:06.525543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:09.579888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:12.602411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:12.607805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:27:12.608029       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:27:12.608220       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-774829_b58f3281-3237-4631-8b2a-3d92bee98ae7!
	I1018 13:27:12.609455       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8e819a57-5518-4431-a3ad-90de48f83d9c", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-774829_b58f3281-3237-4631-8b2a-3d92bee98ae7 became leader
	W1018 13:27:12.615348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:12.627496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:27:12.708658       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-774829_b58f3281-3237-4631-8b2a-3d92bee98ae7!
	W1018 13:27:14.631066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:14.638681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:16.642211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:16.646632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:18.651461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:18.667927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:20.671003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:20.680758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-774829 -n embed-certs-774829
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-774829 -n embed-certs-774829: exit status 2 (547.906487ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-774829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.92s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-977407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-977407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (302.544959ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-977407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
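The exit status 11 above comes from the addon command's paused-state check: minikube shells out to runc, which fails because /run/runc does not exist inside the node. A small sketch for reproducing that check by hand against this profile while the container is still up (both commands are illustrative, pieced together from the error text and the docker exec calls that appear later in this log, and are not part of the test itself):

	# The exact check the error message reports, run inside the node over SSH
	out/minikube-linux-arm64 -p newest-cni-977407 ssh -- sudo runc list -f json
	# Confirm whether runc's default state directory exists in the node
	docker exec newest-cni-977407 ls -ld /run/runc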
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-977407
helpers_test.go:243: (dbg) docker inspect newest-cni-977407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8",
	        "Created": "2025-10-18T13:27:32.409614447Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1043312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:27:32.506333267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/hosts",
	        "LogPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8-json.log",
	        "Name": "/newest-cni-977407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-977407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-977407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8",
	                "LowerDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-977407",
	                "Source": "/var/lib/docker/volumes/newest-cni-977407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-977407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-977407",
	                "name.minikube.sigs.k8s.io": "newest-cni-977407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec49ccb4612f5620dfac9312f1dc8f80507ce561adde36e3a1978dd44ee43226",
	            "SandboxKey": "/var/run/docker/netns/ec49ccb4612f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34198"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34201"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34199"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34200"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-977407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:10:ce:2f:44:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6e5d236d58bbb84ba4cff1833e88a247959569bfbd2830bebe94b5f1ed831d0",
	                    "EndpointID": "31868db1e64d87b884c28e7ffe1546fb3c141d3cfeaf9c14a5abfc1f9ce4368a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-977407",
	                        "fb38573e5ba6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977407 -n newest-cni-977407
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-977407 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-977407 logs -n 25: (1.090122208s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p no-preload-779884 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ addons  │ enable dashboard -p no-preload-779884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:23 UTC │ 18 Oct 25 13:24 UTC │
	│ delete  │ -p cert-expiration-076887                                                                                                                                                                                                                     │ cert-expiration-076887       │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:24 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                                                                                                    │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                                                                                               │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ stop    │ -p embed-certs-774829 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:26 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-208258 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-208258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ image   │ embed-certs-774829 image list --format=json                                                                                                                                                                                                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ pause   │ -p embed-certs-774829 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-977407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:27:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:27:25.534656 1042751 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:27:25.534783 1042751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:27:25.534794 1042751 out.go:374] Setting ErrFile to fd 2...
	I1018 13:27:25.534800 1042751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:27:25.535066 1042751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:27:25.535578 1042751 out.go:368] Setting JSON to false
	I1018 13:27:25.536605 1042751 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18598,"bootTime":1760775448,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:27:25.536677 1042751 start.go:141] virtualization:  
	I1018 13:27:25.540421 1042751 out.go:179] * [newest-cni-977407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:27:25.544258 1042751 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:27:25.544397 1042751 notify.go:220] Checking for updates...
	I1018 13:27:25.550494 1042751 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:27:25.553294 1042751 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:27:25.555494 1042751 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:27:25.559151 1042751 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:27:25.562233 1042751 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1018 13:27:22.324733 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	W1018 13:27:24.328922 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	I1018 13:27:25.566054 1042751 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:25.566227 1042751 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:27:25.615712 1042751 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:27:25.615856 1042751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:27:25.746774 1042751 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:27:25.732263804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:27:25.746874 1042751 docker.go:318] overlay module found
	I1018 13:27:25.751852 1042751 out.go:179] * Using the docker driver based on user configuration
	I1018 13:27:25.755630 1042751 start.go:305] selected driver: docker
	I1018 13:27:25.755668 1042751 start.go:925] validating driver "docker" against <nil>
	I1018 13:27:25.755692 1042751 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:27:25.756508 1042751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:27:25.865732 1042751 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:27:25.854781697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:27:25.865879 1042751 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 13:27:25.865911 1042751 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 13:27:25.866127 1042751 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 13:27:25.871870 1042751 out.go:179] * Using Docker driver with root privileges
	I1018 13:27:25.874751 1042751 cni.go:84] Creating CNI manager for ""
	I1018 13:27:25.874846 1042751 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:27:25.874860 1042751 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 13:27:25.874942 1042751 start.go:349] cluster config:
	{Name:newest-cni-977407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:27:25.877971 1042751 out.go:179] * Starting "newest-cni-977407" primary control-plane node in "newest-cni-977407" cluster
	I1018 13:27:25.880856 1042751 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:27:25.883950 1042751 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:27:25.886851 1042751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:27:25.886913 1042751 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:27:25.886933 1042751 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:27:25.886940 1042751 cache.go:58] Caching tarball of preloaded images
	I1018 13:27:25.887035 1042751 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:27:25.887045 1042751 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:27:25.887161 1042751 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/config.json ...
	I1018 13:27:25.887178 1042751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/config.json: {Name:mka1d603368a96ed484bf871a1f297a926e58425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:25.910607 1042751 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:27:25.910628 1042751 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:27:25.910641 1042751 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:27:25.910676 1042751 start.go:360] acquireMachinesLock for newest-cni-977407: {Name:mk0de410d37c351444ae892375ed0eca81429481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:27:25.910776 1042751 start.go:364] duration metric: took 85.212µs to acquireMachinesLock for "newest-cni-977407"
	I1018 13:27:25.910802 1042751 start.go:93] Provisioning new machine with config: &{Name:newest-cni-977407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:27:25.910883 1042751 start.go:125] createHost starting for "" (driver="docker")
	I1018 13:27:25.914359 1042751 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 13:27:25.914605 1042751 start.go:159] libmachine.API.Create for "newest-cni-977407" (driver="docker")
	I1018 13:27:25.914636 1042751 client.go:168] LocalClient.Create starting
	I1018 13:27:25.914721 1042751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 13:27:25.914768 1042751 main.go:141] libmachine: Decoding PEM data...
	I1018 13:27:25.914781 1042751 main.go:141] libmachine: Parsing certificate...
	I1018 13:27:25.914839 1042751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 13:27:25.914857 1042751 main.go:141] libmachine: Decoding PEM data...
	I1018 13:27:25.914867 1042751 main.go:141] libmachine: Parsing certificate...
	I1018 13:27:25.915227 1042751 cli_runner.go:164] Run: docker network inspect newest-cni-977407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 13:27:25.940802 1042751 cli_runner.go:211] docker network inspect newest-cni-977407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 13:27:25.940890 1042751 network_create.go:284] running [docker network inspect newest-cni-977407] to gather additional debugging logs...
	I1018 13:27:25.940907 1042751 cli_runner.go:164] Run: docker network inspect newest-cni-977407
	W1018 13:27:25.974030 1042751 cli_runner.go:211] docker network inspect newest-cni-977407 returned with exit code 1
	I1018 13:27:25.974058 1042751 network_create.go:287] error running [docker network inspect newest-cni-977407]: docker network inspect newest-cni-977407: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-977407 not found
	I1018 13:27:25.974071 1042751 network_create.go:289] output of [docker network inspect newest-cni-977407]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-977407 not found
	
	** /stderr **
	I1018 13:27:25.974184 1042751 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:27:26.006398 1042751 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
	I1018 13:27:26.006828 1042751 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b162987809b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:5f:25:ac:cd:2a} reservation:<nil>}
	I1018 13:27:26.007115 1042751 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c986d614dab5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:69:4f:12:e6:e4} reservation:<nil>}
	I1018 13:27:26.007561 1042751 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1af80}
	I1018 13:27:26.007583 1042751 network_create.go:124] attempt to create docker network newest-cni-977407 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 13:27:26.007678 1042751 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-977407 newest-cni-977407
	I1018 13:27:26.110179 1042751 network_create.go:108] docker network newest-cni-977407 192.168.76.0/24 created
	I1018 13:27:26.110208 1042751 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-977407" container
	I1018 13:27:26.110292 1042751 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 13:27:26.133304 1042751 cli_runner.go:164] Run: docker volume create newest-cni-977407 --label name.minikube.sigs.k8s.io=newest-cni-977407 --label created_by.minikube.sigs.k8s.io=true
	I1018 13:27:26.159553 1042751 oci.go:103] Successfully created a docker volume newest-cni-977407
	I1018 13:27:26.159639 1042751 cli_runner.go:164] Run: docker run --rm --name newest-cni-977407-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-977407 --entrypoint /usr/bin/test -v newest-cni-977407:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 13:27:26.991935 1042751 oci.go:107] Successfully prepared a docker volume newest-cni-977407
	I1018 13:27:26.991981 1042751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:27:26.991999 1042751 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 13:27:26.992071 1042751 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-977407:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 13:27:26.818710 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	W1018 13:27:28.822444 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	I1018 13:27:32.335919 1042751 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-977407:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.343800955s)
	I1018 13:27:32.335952 1042751 kic.go:203] duration metric: took 5.343948813s to extract preloaded images to volume ...
	W1018 13:27:32.336099 1042751 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 13:27:32.336221 1042751 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 13:27:32.391759 1042751 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-977407 --name newest-cni-977407 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-977407 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-977407 --network newest-cni-977407 --ip 192.168.76.2 --volume newest-cni-977407:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 13:27:32.729633 1042751 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Running}}
	I1018 13:27:32.752744 1042751 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:27:32.781153 1042751 cli_runner.go:164] Run: docker exec newest-cni-977407 stat /var/lib/dpkg/alternatives/iptables
	I1018 13:27:32.838568 1042751 oci.go:144] the created container "newest-cni-977407" has a running status.
	I1018 13:27:32.838594 1042751 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa...
	I1018 13:27:33.197590 1042751 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 13:27:33.221392 1042751 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:27:33.249338 1042751 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 13:27:33.249365 1042751 kic_runner.go:114] Args: [docker exec --privileged newest-cni-977407 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 13:27:33.298186 1042751 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:27:33.326948 1042751 machine.go:93] provisionDockerMachine start ...
	I1018 13:27:33.327051 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:33.358406 1042751 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:33.358744 1042751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1018 13:27:33.358756 1042751 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:27:33.359346 1042751 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42104->127.0.0.1:34197: read: connection reset by peer
	W1018 13:27:31.317934 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	W1018 13:27:33.325384 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	I1018 13:27:36.507586 1042751 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-977407
	
	I1018 13:27:36.507670 1042751 ubuntu.go:182] provisioning hostname "newest-cni-977407"
	I1018 13:27:36.507763 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:36.525925 1042751 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:36.526242 1042751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1018 13:27:36.526260 1042751 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-977407 && echo "newest-cni-977407" | sudo tee /etc/hostname
	I1018 13:27:36.686037 1042751 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-977407
	
	I1018 13:27:36.686137 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:36.706283 1042751 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:36.706596 1042751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1018 13:27:36.706618 1042751 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-977407' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-977407/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-977407' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:27:36.855836 1042751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:27:36.855915 1042751 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:27:36.855946 1042751 ubuntu.go:190] setting up certificates
	I1018 13:27:36.855958 1042751 provision.go:84] configureAuth start
	I1018 13:27:36.856041 1042751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-977407
	I1018 13:27:36.874246 1042751 provision.go:143] copyHostCerts
	I1018 13:27:36.874320 1042751 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:27:36.874340 1042751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:27:36.874423 1042751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:27:36.874534 1042751 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:27:36.874547 1042751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:27:36.874576 1042751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:27:36.874647 1042751 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:27:36.874657 1042751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:27:36.874683 1042751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:27:36.874743 1042751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.newest-cni-977407 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-977407]
	I1018 13:27:37.430968 1042751 provision.go:177] copyRemoteCerts
	I1018 13:27:37.431039 1042751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:27:37.431080 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:37.448733 1042751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	I1018 13:27:37.551875 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:27:37.570237 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 13:27:37.588644 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 13:27:37.608488 1042751 provision.go:87] duration metric: took 752.508873ms to configureAuth
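The server certificate generated above carries SANs for 127.0.0.1, 192.168.76.2, localhost, minikube and newest-cni-977407. A minimal sketch, using only paths already shown in this log, for spot-checking those SANs before the cert is scp'd to /etc/docker on the node:

	# sketch: print the SAN extension of the freshly generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expected to list DNS:localhost, DNS:minikube, DNS:newest-cni-977407,
	# IP Address:127.0.0.1, IP Address:192.168.76.2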
	I1018 13:27:37.608516 1042751 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:27:37.608716 1042751 config.go:182] Loaded profile config "newest-cni-977407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:27:37.608839 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:37.626571 1042751 main.go:141] libmachine: Using SSH client type: native
	I1018 13:27:37.626892 1042751 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I1018 13:27:37.626911 1042751 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:27:37.973100 1042751 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:27:37.973118 1042751 machine.go:96] duration metric: took 4.646153504s to provisionDockerMachine
	I1018 13:27:37.973128 1042751 client.go:171] duration metric: took 12.058486274s to LocalClient.Create
	I1018 13:27:37.973157 1042751 start.go:167] duration metric: took 12.058539018s to libmachine.API.Create "newest-cni-977407"
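A quick sketch for confirming on the node that the CRIO_MINIKUBE_OPTIONS drop-in written above took effect (file path and contents taken from the SSH command output):

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio   # should report "active" after the restart above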
	I1018 13:27:37.973164 1042751 start.go:293] postStartSetup for "newest-cni-977407" (driver="docker")
	I1018 13:27:37.973174 1042751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:27:37.973243 1042751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:27:37.973284 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:37.991079 1042751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	I1018 13:27:38.096689 1042751 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:27:38.100211 1042751 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:27:38.100240 1042751 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:27:38.100252 1042751 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:27:38.100309 1042751 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:27:38.100402 1042751 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:27:38.100518 1042751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:27:38.108334 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:27:38.127084 1042751 start.go:296] duration metric: took 153.904951ms for postStartSetup
	I1018 13:27:38.127466 1042751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-977407
	I1018 13:27:38.144422 1042751 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/config.json ...
	I1018 13:27:38.144713 1042751 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:27:38.144762 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:38.174921 1042751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	I1018 13:27:38.281044 1042751 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:27:38.285738 1042751 start.go:128] duration metric: took 12.374840966s to createHost
	I1018 13:27:38.285764 1042751 start.go:83] releasing machines lock for "newest-cni-977407", held for 12.374979626s
	I1018 13:27:38.285837 1042751 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-977407
	I1018 13:27:38.307998 1042751 ssh_runner.go:195] Run: cat /version.json
	I1018 13:27:38.308033 1042751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:27:38.308049 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:38.308085 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:27:38.329975 1042751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	I1018 13:27:38.348339 1042751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	I1018 13:27:38.435932 1042751 ssh_runner.go:195] Run: systemctl --version
	I1018 13:27:38.543424 1042751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:27:38.581608 1042751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:27:38.586057 1042751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:27:38.586131 1042751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:27:38.618709 1042751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 13:27:38.618789 1042751 start.go:495] detecting cgroup driver to use...
	I1018 13:27:38.618859 1042751 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:27:38.618942 1042751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:27:38.637780 1042751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:27:38.650689 1042751 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:27:38.650774 1042751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:27:38.669083 1042751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:27:38.689474 1042751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:27:38.824474 1042751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:27:38.959285 1042751 docker.go:234] disabling docker service ...
	I1018 13:27:38.959372 1042751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:27:38.983413 1042751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:27:38.997755 1042751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:27:39.124374 1042751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:27:39.251951 1042751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:27:39.267067 1042751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:27:39.282420 1042751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:27:39.282533 1042751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:39.291361 1042751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:27:39.291496 1042751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:39.300937 1042751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:39.309607 1042751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:39.322522 1042751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:27:39.331089 1042751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:39.340341 1042751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:39.355210 1042751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:27:39.364847 1042751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:27:39.372483 1042751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:27:39.380474 1042751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:27:39.507390 1042751 ssh_runner.go:195] Run: sudo systemctl restart crio
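After the sed edits and the crio restart above, /etc/crio/crio.conf.d/02-crio.conf should contain the pause image, cgroup driver and unprivileged-port sysctl that were written in. An illustrative check only; the exact file layout may differ:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",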
	I1018 13:27:39.671000 1042751 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:27:39.671111 1042751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:27:39.675234 1042751 start.go:563] Will wait 60s for crictl version
	I1018 13:27:39.675331 1042751 ssh_runner.go:195] Run: which crictl
	I1018 13:27:39.679094 1042751 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:27:39.708554 1042751 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:27:39.708680 1042751 ssh_runner.go:195] Run: crio --version
	I1018 13:27:39.740638 1042751 ssh_runner.go:195] Run: crio --version
	I1018 13:27:39.778353 1042751 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 13:27:39.781220 1042751 cli_runner.go:164] Run: docker network inspect newest-cni-977407 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:27:39.796983 1042751 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 13:27:39.800839 1042751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
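The hosts-file update above goes through a temp file plus sudo cp rather than a plain '>' redirect, because the redirect would be performed by the unprivileged shell. A sketch for verifying the entry on the node afterwards:

	getent hosts host.minikube.internal
	# expected: 192.168.76.1   host.minikube.internal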
	I1018 13:27:39.813852 1042751 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 13:27:39.816630 1042751 kubeadm.go:883] updating cluster {Name:newest-cni-977407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 13:27:39.816769 1042751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:27:39.816861 1042751 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:27:39.851998 1042751 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:27:39.852025 1042751 crio.go:433] Images already preloaded, skipping extraction
	I1018 13:27:39.852087 1042751 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 13:27:39.884898 1042751 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 13:27:39.884921 1042751 cache_images.go:85] Images are preloaded, skipping loading
	I1018 13:27:39.884929 1042751 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 13:27:39.885036 1042751 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-977407 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
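The kubelet flags above end up in the systemd drop-in that is scp'd a few lines below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 367 bytes). A sketch for inspecting the result on the node:

	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	systemctl cat kubelet | grep -- '--node-ip'
	# should show --node-ip=192.168.76.2 from the drop-in's ExecStart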
	I1018 13:27:39.885126 1042751 ssh_runner.go:195] Run: crio config
	I1018 13:27:39.943302 1042751 cni.go:84] Creating CNI manager for ""
	I1018 13:27:39.943327 1042751 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:27:39.943343 1042751 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 13:27:39.943367 1042751 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-977407 NodeName:newest-cni-977407 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 13:27:39.943501 1042751 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-977407"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 13:27:39.943577 1042751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 13:27:39.952739 1042751 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 13:27:39.952816 1042751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 13:27:39.960520 1042751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 13:27:39.973346 1042751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 13:27:39.986858 1042751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
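The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new. As a sketch (assuming kubeadm's "config validate" subcommand, available in recent releases), the file could be checked offline before init runs:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new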
	I1018 13:27:40.005874 1042751 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 13:27:40.014542 1042751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 13:27:40.031706 1042751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:27:40.172147 1042751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:27:40.192399 1042751 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407 for IP: 192.168.76.2
	I1018 13:27:40.192424 1042751 certs.go:195] generating shared ca certs ...
	I1018 13:27:40.192442 1042751 certs.go:227] acquiring lock for ca certs: {Name:mke3bd2a69e1a2c8eeacc728651996fb6d634fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:40.192668 1042751 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key
	I1018 13:27:40.192734 1042751 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key
	I1018 13:27:40.192750 1042751 certs.go:257] generating profile certs ...
	I1018 13:27:40.192828 1042751 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/client.key
	I1018 13:27:40.192847 1042751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/client.crt with IP's: []
	I1018 13:27:40.425629 1042751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/client.crt ...
	I1018 13:27:40.425664 1042751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/client.crt: {Name:mk3967861d5c5d59372934ed3358b4c994b7b7ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:40.425866 1042751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/client.key ...
	I1018 13:27:40.425879 1042751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/client.key: {Name:mka2da43d88735551dc6f5e223db293362c6c19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:40.425963 1042751 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.key.5b807c9c
	I1018 13:27:40.425983 1042751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.crt.5b807c9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	W1018 13:27:35.818821 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	W1018 13:27:37.818936 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	W1018 13:27:39.819438 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	I1018 13:27:40.586966 1042751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.crt.5b807c9c ...
	I1018 13:27:40.586999 1042751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.crt.5b807c9c: {Name:mkbac08dec54b30025bf336161b1454142ea41a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:40.587184 1042751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.key.5b807c9c ...
	I1018 13:27:40.587200 1042751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.key.5b807c9c: {Name:mk5c24476e6961135a2efbdf8f1343931d740441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:40.587285 1042751 certs.go:382] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.crt.5b807c9c -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.crt
	I1018 13:27:40.587361 1042751 certs.go:386] copying /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.key.5b807c9c -> /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.key
	I1018 13:27:40.587420 1042751 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/proxy-client.key
	I1018 13:27:40.587439 1042751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/proxy-client.crt with IP's: []
	I1018 13:27:41.657092 1042751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/proxy-client.crt ...
	I1018 13:27:41.657125 1042751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/proxy-client.crt: {Name:mkefebd1eb8800cc4650944353523680a7895bbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:41.657315 1042751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/proxy-client.key ...
	I1018 13:27:41.657334 1042751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/proxy-client.key: {Name:mk15ab5717edf1c672a0052b4998e1e7f4ffb4f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:27:41.657513 1042751 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem (1338 bytes)
	W1018 13:27:41.657559 1042751 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086_empty.pem, impossibly tiny 0 bytes
	I1018 13:27:41.657578 1042751 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 13:27:41.657601 1042751 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem (1082 bytes)
	I1018 13:27:41.657626 1042751 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem (1123 bytes)
	I1018 13:27:41.657653 1042751 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem (1675 bytes)
	I1018 13:27:41.657693 1042751 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:27:41.658266 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 13:27:41.690909 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1671 bytes)
	I1018 13:27:41.711955 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 13:27:41.733364 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 13:27:41.751199 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 13:27:41.772388 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 13:27:41.791760 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 13:27:41.811981 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 13:27:41.835714 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 13:27:41.855157 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/836086.pem --> /usr/share/ca-certificates/836086.pem (1338 bytes)
	I1018 13:27:41.874804 1042751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /usr/share/ca-certificates/8360862.pem (1708 bytes)
	I1018 13:27:41.892559 1042751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 13:27:41.906366 1042751 ssh_runner.go:195] Run: openssl version
	I1018 13:27:41.915074 1042751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 13:27:41.924682 1042751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:41.928483 1042751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:16 /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:41.928553 1042751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 13:27:41.970153 1042751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 13:27:41.979198 1042751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/836086.pem && ln -fs /usr/share/ca-certificates/836086.pem /etc/ssl/certs/836086.pem"
	I1018 13:27:41.987999 1042751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/836086.pem
	I1018 13:27:41.992093 1042751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:23 /usr/share/ca-certificates/836086.pem
	I1018 13:27:41.992173 1042751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/836086.pem
	I1018 13:27:42.037065 1042751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/836086.pem /etc/ssl/certs/51391683.0"
	I1018 13:27:42.046103 1042751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8360862.pem && ln -fs /usr/share/ca-certificates/8360862.pem /etc/ssl/certs/8360862.pem"
	I1018 13:27:42.054799 1042751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8360862.pem
	I1018 13:27:42.058919 1042751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:23 /usr/share/ca-certificates/8360862.pem
	I1018 13:27:42.058994 1042751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8360862.pem
	I1018 13:27:42.103442 1042751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8360862.pem /etc/ssl/certs/3ec20f2e.0"
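The 8-hex-digit link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes; the symlinks let the system trust store locate each CA by hash at verification time. Recomputing one of them as a sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above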
	I1018 13:27:42.114016 1042751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 13:27:42.118953 1042751 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 13:27:42.119071 1042751 kubeadm.go:400] StartCluster: {Name:newest-cni-977407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:27:42.119218 1042751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 13:27:42.119311 1042751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 13:27:42.154446 1042751 cri.go:89] found id: ""
	I1018 13:27:42.154559 1042751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 13:27:42.166271 1042751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 13:27:42.177946 1042751 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 13:27:42.178032 1042751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 13:27:42.188415 1042751 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 13:27:42.188443 1042751 kubeadm.go:157] found existing configuration files:
	
	I1018 13:27:42.188511 1042751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 13:27:42.199193 1042751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 13:27:42.199338 1042751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 13:27:42.211647 1042751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 13:27:42.223351 1042751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 13:27:42.223427 1042751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 13:27:42.233037 1042751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 13:27:42.242308 1042751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 13:27:42.242407 1042751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 13:27:42.251399 1042751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 13:27:42.260212 1042751 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 13:27:42.260309 1042751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 13:27:42.269377 1042751 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 13:27:42.315423 1042751 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 13:27:42.315532 1042751 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 13:27:42.345992 1042751 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 13:27:42.346070 1042751 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 13:27:42.346113 1042751 kubeadm.go:318] OS: Linux
	I1018 13:27:42.346170 1042751 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 13:27:42.346227 1042751 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 13:27:42.346289 1042751 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 13:27:42.346345 1042751 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 13:27:42.346402 1042751 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 13:27:42.346464 1042751 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 13:27:42.346517 1042751 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 13:27:42.346571 1042751 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 13:27:42.346622 1042751 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 13:27:42.421313 1042751 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 13:27:42.421468 1042751 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 13:27:42.421594 1042751 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 13:27:42.430664 1042751 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 13:27:42.434752 1042751 out.go:252]   - Generating certificates and keys ...
	I1018 13:27:42.434868 1042751 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 13:27:42.434972 1042751 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 13:27:42.644638 1042751 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 13:27:43.533589 1042751 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 13:27:44.839123 1042751 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 13:27:45.080661 1042751 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1018 13:27:42.319381 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	W1018 13:27:44.321732 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	I1018 13:27:45.539476 1042751 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 13:27:45.539853 1042751 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-977407] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 13:27:46.091958 1042751 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 13:27:46.092374 1042751 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-977407] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 13:27:46.235693 1042751 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 13:27:46.320510 1042751 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 13:27:46.690576 1042751 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 13:27:46.690865 1042751 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 13:27:46.918451 1042751 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 13:27:47.926091 1042751 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 13:27:48.449929 1042751 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 13:27:49.049724 1042751 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 13:27:49.394535 1042751 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 13:27:49.395181 1042751 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 13:27:49.397909 1042751 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 13:27:49.401625 1042751 out.go:252]   - Booting up control plane ...
	I1018 13:27:49.401753 1042751 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 13:27:49.401845 1042751 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 13:27:49.401919 1042751 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 13:27:49.420259 1042751 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 13:27:49.420620 1042751 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 13:27:49.430179 1042751 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 13:27:49.431019 1042751 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 13:27:49.431528 1042751 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 13:27:49.573636 1042751 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 13:27:49.573768 1042751 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1018 13:27:46.819895 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	W1018 13:27:48.825175 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	I1018 13:27:51.075315 1042751 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501785515s
	I1018 13:27:51.079230 1042751 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 13:27:51.079333 1042751 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 13:27:51.079549 1042751 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 13:27:51.079689 1042751 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 13:27:53.807932 1042751 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.728407789s
	W1018 13:27:51.318979 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	W1018 13:27:53.818399 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	I1018 13:27:55.920654 1042751 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.84142418s
	I1018 13:27:57.581476 1042751 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502086317s
	I1018 13:27:57.601028 1042751 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 13:27:57.617663 1042751 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 13:27:57.632165 1042751 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 13:27:57.632396 1042751 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-977407 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 13:27:57.644813 1042751 kubeadm.go:318] [bootstrap-token] Using token: 19irzf.bo1r3clfbsdkqe1q
	W1018 13:27:55.818904 1039404 pod_ready.go:104] pod "coredns-66bc5c9577-2g4gz" is not "Ready", error: <nil>
	I1018 13:27:56.318765 1039404 pod_ready.go:94] pod "coredns-66bc5c9577-2g4gz" is "Ready"
	I1018 13:27:56.318797 1039404 pod_ready.go:86] duration metric: took 40.506100855s for pod "coredns-66bc5c9577-2g4gz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:56.322142 1039404 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:56.328252 1039404 pod_ready.go:94] pod "etcd-default-k8s-diff-port-208258" is "Ready"
	I1018 13:27:56.328283 1039404 pod_ready.go:86] duration metric: took 6.102807ms for pod "etcd-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:56.331405 1039404 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:56.337208 1039404 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-208258" is "Ready"
	I1018 13:27:56.337247 1039404 pod_ready.go:86] duration metric: took 5.810676ms for pod "kube-apiserver-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:56.340538 1039404 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:56.516866 1039404 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-208258" is "Ready"
	I1018 13:27:56.516892 1039404 pod_ready.go:86] duration metric: took 176.324384ms for pod "kube-controller-manager-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:56.717440 1039404 pod_ready.go:83] waiting for pod "kube-proxy-q5bvt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:57.116920 1039404 pod_ready.go:94] pod "kube-proxy-q5bvt" is "Ready"
	I1018 13:27:57.116950 1039404 pod_ready.go:86] duration metric: took 399.479289ms for pod "kube-proxy-q5bvt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:57.316948 1039404 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:57.716955 1039404 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-208258" is "Ready"
	I1018 13:27:57.716983 1039404 pod_ready.go:86] duration metric: took 400.010839ms for pod "kube-scheduler-default-k8s-diff-port-208258" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 13:27:57.716995 1039404 pod_ready.go:40] duration metric: took 41.910164845s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 13:27:57.776913 1039404 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:27:57.780092 1039404 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-208258" cluster and "default" namespace by default
	I1018 13:27:57.647735 1042751 out.go:252]   - Configuring RBAC rules ...
	I1018 13:27:57.647873 1042751 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 13:27:57.651946 1042751 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 13:27:57.660187 1042751 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 13:27:57.666754 1042751 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 13:27:57.671624 1042751 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 13:27:57.676254 1042751 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 13:27:57.992189 1042751 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 13:27:58.484578 1042751 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 13:27:58.988724 1042751 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 13:27:58.990056 1042751 kubeadm.go:318] 
	I1018 13:27:58.990160 1042751 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 13:27:58.990174 1042751 kubeadm.go:318] 
	I1018 13:27:58.990301 1042751 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 13:27:58.990316 1042751 kubeadm.go:318] 
	I1018 13:27:58.990343 1042751 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 13:27:58.990406 1042751 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 13:27:58.990459 1042751 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 13:27:58.990463 1042751 kubeadm.go:318] 
	I1018 13:27:58.990520 1042751 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 13:27:58.990524 1042751 kubeadm.go:318] 
	I1018 13:27:58.990580 1042751 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 13:27:58.990585 1042751 kubeadm.go:318] 
	I1018 13:27:58.990639 1042751 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 13:27:58.990718 1042751 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 13:27:58.990789 1042751 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 13:27:58.990794 1042751 kubeadm.go:318] 
	I1018 13:27:58.990881 1042751 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 13:27:58.990962 1042751 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 13:27:58.990966 1042751 kubeadm.go:318] 
	I1018 13:27:58.991054 1042751 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 19irzf.bo1r3clfbsdkqe1q \
	I1018 13:27:58.991163 1042751 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e \
	I1018 13:27:58.991184 1042751 kubeadm.go:318] 	--control-plane 
	I1018 13:27:58.991189 1042751 kubeadm.go:318] 
	I1018 13:27:58.991278 1042751 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 13:27:58.991283 1042751 kubeadm.go:318] 
	I1018 13:27:58.991369 1042751 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 19irzf.bo1r3clfbsdkqe1q \
	I1018 13:27:58.991476 1042751 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e 
	I1018 13:27:58.994697 1042751 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 13:27:58.994933 1042751 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 13:27:58.995046 1042751 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
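The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA under the certificateDir used here (/var/lib/minikube/certs). A sketch of the standard openssl recipe, assuming an RSA CA key as minikube generates:

	openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print 1c82b1da5f4abbff8392102787076f8136062ebad72c7a702a79989c48c8be0e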
	I1018 13:27:58.995062 1042751 cni.go:84] Creating CNI manager for ""
	I1018 13:27:58.995068 1042751 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:27:58.998324 1042751 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 13:27:59.004507 1042751 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 13:27:59.010060 1042751 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 13:27:59.010084 1042751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 13:27:59.031874 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 13:27:59.354709 1042751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 13:27:59.354853 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:27:59.354930 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-977407 minikube.k8s.io/updated_at=2025_10_18T13_27_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=newest-cni-977407 minikube.k8s.io/primary=true
	I1018 13:27:59.365673 1042751 ops.go:34] apiserver oom_adj: -16
	I1018 13:27:59.514285 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:00.031699 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:00.515174 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:01.014396 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:01.515140 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:02.014905 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:02.514379 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:03.014916 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:03.514410 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:04.015045 1042751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 13:28:04.170517 1042751 kubeadm.go:1113] duration metric: took 4.815710422s to wait for elevateKubeSystemPrivileges
	I1018 13:28:04.170549 1042751 kubeadm.go:402] duration metric: took 22.051485412s to StartCluster
	I1018 13:28:04.170566 1042751 settings.go:142] acquiring lock: {Name:mk5bf8d55d3f76468cdb0d2ca461ece56ab3043f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:28:04.170629 1042751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:28:04.171582 1042751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/kubeconfig: {Name:mk9d81e704441132e954a911f54f762a77297896 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:28:04.171840 1042751 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:28:04.171983 1042751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 13:28:04.172278 1042751 config.go:182] Loaded profile config "newest-cni-977407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:28:04.172319 1042751 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 13:28:04.172401 1042751 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-977407"
	I1018 13:28:04.172416 1042751 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-977407"
	I1018 13:28:04.172424 1042751 addons.go:69] Setting default-storageclass=true in profile "newest-cni-977407"
	I1018 13:28:04.172440 1042751 host.go:66] Checking if "newest-cni-977407" exists ...
	I1018 13:28:04.172444 1042751 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-977407"
	I1018 13:28:04.172796 1042751 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:28:04.173069 1042751 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:28:04.175166 1042751 out.go:179] * Verifying Kubernetes components...
	I1018 13:28:04.178311 1042751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:28:04.234856 1042751 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 13:28:04.238709 1042751 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:28:04.238732 1042751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 13:28:04.238810 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:28:04.241363 1042751 addons.go:238] Setting addon default-storageclass=true in "newest-cni-977407"
	I1018 13:28:04.241415 1042751 host.go:66] Checking if "newest-cni-977407" exists ...
	I1018 13:28:04.241850 1042751 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:28:04.276360 1042751 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 13:28:04.276392 1042751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 13:28:04.276465 1042751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:28:04.285026 1042751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	I1018 13:28:04.323828 1042751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	I1018 13:28:04.519153 1042751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 13:28:04.524988 1042751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 13:28:04.595517 1042751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 13:28:04.702884 1042751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 13:28:05.034130 1042751 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 13:28:05.036812 1042751 api_server.go:52] waiting for apiserver process to appear ...
	I1018 13:28:05.037052 1042751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:28:05.289503 1042751 api_server.go:72] duration metric: took 1.117628116s to wait for apiserver process to appear ...
	I1018 13:28:05.289529 1042751 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:28:05.289548 1042751 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:28:05.314325 1042751 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 13:28:05.315545 1042751 api_server.go:141] control plane version: v1.34.1
	I1018 13:28:05.315620 1042751 api_server.go:131] duration metric: took 26.082721ms to wait for apiserver health ...
	I1018 13:28:05.315706 1042751 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:28:05.321183 1042751 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 13:28:05.323933 1042751 system_pods.go:59] 8 kube-system pods found
	I1018 13:28:05.323972 1042751 system_pods.go:61] "coredns-66bc5c9577-h2dzv" [7bf41590-b205-482b-a509-cca14eef8f53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 13:28:05.323981 1042751 system_pods.go:61] "etcd-newest-cni-977407" [e959f287-a8d0-4c66-882a-7bf03c0d596b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:28:05.323989 1042751 system_pods.go:61] "kindnet-g5rjn" [62df2833-c27f-44a7-932f-ddd5e8e4888e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 13:28:05.323996 1042751 system_pods.go:61] "kube-apiserver-newest-cni-977407" [dfc137e0-d480-483e-96e3-85ca7dba3e3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:28:05.324003 1042751 system_pods.go:61] "kube-controller-manager-newest-cni-977407" [d43756f2-e9bd-413a-b29f-828c43157138] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:28:05.324010 1042751 system_pods.go:61] "kube-proxy-x4kds" [fd820b89-8782-4a68-8488-8eae7823ed4e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 13:28:05.324014 1042751 system_pods.go:61] "kube-scheduler-newest-cni-977407" [bbe144ae-f7e7-4fb9-b026-a17a60555951] Running
	I1018 13:28:05.324022 1042751 system_pods.go:61] "storage-provisioner" [4d216f4e-9951-4993-8149-3f06f900b895] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 13:28:05.324028 1042751 system_pods.go:74] duration metric: took 8.294307ms to wait for pod list to return data ...
	I1018 13:28:05.324037 1042751 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:28:05.325404 1042751 addons.go:514] duration metric: took 1.153072586s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 13:28:05.332830 1042751 default_sa.go:45] found service account: "default"
	I1018 13:28:05.332859 1042751 default_sa.go:55] duration metric: took 8.81643ms for default service account to be created ...
	I1018 13:28:05.332874 1042751 kubeadm.go:586] duration metric: took 1.161000972s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 13:28:05.332891 1042751 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:28:05.337051 1042751 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:28:05.337135 1042751 node_conditions.go:123] node cpu capacity is 2
	I1018 13:28:05.337165 1042751 node_conditions.go:105] duration metric: took 4.266962ms to run NodePressure ...
	I1018 13:28:05.337190 1042751 start.go:241] waiting for startup goroutines ...
	I1018 13:28:05.538924 1042751 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-977407" context rescaled to 1 replicas
	I1018 13:28:05.538969 1042751 start.go:246] waiting for cluster config update ...
	I1018 13:28:05.539001 1042751 start.go:255] writing updated cluster config ...
	I1018 13:28:05.539328 1042751 ssh_runner.go:195] Run: rm -f paused
	I1018 13:28:05.624422 1042751 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:28:05.627868 1042751 out.go:179] * Done! kubectl is now configured to use "newest-cni-977407" cluster and "default" namespace by default
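
The kubeadm output earlier in this run pins the cluster CA with --discovery-token-ca-cert-hash. If the printed join command is lost, that hash can be recomputed on the control-plane node; a minimal sketch, assuming the default kubeadm CA path (run inside the node, e.g. via minikube -p newest-cni-977407 ssh):

    # recompute the CA public-key hash that kubeadm printed above
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'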
	
	
	==> CRI-O <==
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.619885367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.626730049Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fa407a81-b2a8-4b37-9968-197931fdb2c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.642504705Z" level=info msg="Ran pod sandbox ebf127a4334cf560afb772f6bc898925368062bca843ee99232f22fc4edfe316 with infra container: kube-system/kindnet-g5rjn/POD" id=fa407a81-b2a8-4b37-9968-197931fdb2c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.645022814Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c37efd59-2d3f-4db5-9e4c-1cde22f13e67 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.647787925Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4b80922c-a8d4-4a66-bd97-cd675b207761 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.655407007Z" level=info msg="Creating container: kube-system/kindnet-g5rjn/kindnet-cni" id=2d29e67d-f860-4a6d-bedb-0790ece7af4d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.655831669Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.667420323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.668176123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.699854617Z" level=info msg="Created container 8dcf51577eaf702ff73bc392e65680a148023b39a7fd78b9dafe8f67b6489dda: kube-system/kindnet-g5rjn/kindnet-cni" id=2d29e67d-f860-4a6d-bedb-0790ece7af4d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.701040247Z" level=info msg="Starting container: 8dcf51577eaf702ff73bc392e65680a148023b39a7fd78b9dafe8f67b6489dda" id=08b6b993-bffe-4ed8-a450-16c14527d060 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:28:04 newest-cni-977407 crio[836]: time="2025-10-18T13:28:04.716097208Z" level=info msg="Started container" PID=1482 containerID=8dcf51577eaf702ff73bc392e65680a148023b39a7fd78b9dafe8f67b6489dda description=kube-system/kindnet-g5rjn/kindnet-cni id=08b6b993-bffe-4ed8-a450-16c14527d060 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebf127a4334cf560afb772f6bc898925368062bca843ee99232f22fc4edfe316
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.232143278Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-x4kds/POD" id=34dc106d-e4b2-416c-93df-2387732565bd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.232224034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.237903779Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=34dc106d-e4b2-416c-93df-2387732565bd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.244990491Z" level=info msg="Ran pod sandbox a66c934714a0cf6b2094a5dd374abb6b3d81fc5ab68669e9564898d4a0605e2e with infra container: kube-system/kube-proxy-x4kds/POD" id=34dc106d-e4b2-416c-93df-2387732565bd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.246554201Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6f1004c7-3d11-4027-8a36-4065aba194ec name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.24813003Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b187064c-6797-42d7-876b-a2330e371fc3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.253773591Z" level=info msg="Creating container: kube-system/kube-proxy-x4kds/kube-proxy" id=276c3fef-1810-41af-8cad-3ef543cc7df4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.254060051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.262369989Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.263210672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.294655571Z" level=info msg="Created container fd1359868f0bd8699e5960f30cfa127871031191cd2f1f24c1adaefa1e979b83: kube-system/kube-proxy-x4kds/kube-proxy" id=276c3fef-1810-41af-8cad-3ef543cc7df4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.296999951Z" level=info msg="Starting container: fd1359868f0bd8699e5960f30cfa127871031191cd2f1f24c1adaefa1e979b83" id=8e3c22d6-4088-4ba4-ba49-c4addc78bfcb name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:28:05 newest-cni-977407 crio[836]: time="2025-10-18T13:28:05.300100861Z" level=info msg="Started container" PID=1542 containerID=fd1359868f0bd8699e5960f30cfa127871031191cd2f1f24c1adaefa1e979b83 description=kube-system/kube-proxy-x4kds/kube-proxy id=8e3c22d6-4088-4ba4-ba49-c4addc78bfcb name=/runtime.v1.RuntimeService/StartContainer sandboxID=a66c934714a0cf6b2094a5dd374abb6b3d81fc5ab68669e9564898d4a0605e2e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	fd1359868f0bd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   a66c934714a0c       kube-proxy-x4kds                            kube-system
	8dcf51577eaf7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   ebf127a4334cf       kindnet-g5rjn                               kube-system
	bc54ce5478f9c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   9cb34b763d981       etcd-newest-cni-977407                      kube-system
	21719aae2fefe       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   15bdd0a9aacbf       kube-controller-manager-newest-cni-977407   kube-system
	c0fb7858e821a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   b3a7df556fb8b       kube-scheduler-newest-cni-977407            kube-system
	19915eef584b0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   2f59b828eb362       kube-apiserver-newest-cni-977407            kube-system
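
The container status table above is CRI-level state; it can be reproduced directly against the CRI-O socket with crictl. A sketch, assuming the standard CRI-O socket path (run inside the node, e.g. via minikube -p newest-cni-977407 ssh):

    # list pod sandboxes and all containers as the kubelet's runtime sees them
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a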
	
	
	==> describe nodes <==
	Name:               newest-cni-977407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-977407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=newest-cni-977407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_27_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:27:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-977407
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:27:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:27:58 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:27:58 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:27:58 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 13:27:58 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-977407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f89834aa-d14f-47e3-baef-c9c838d135d3
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-977407                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-g5rjn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-977407             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-977407    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-x4kds                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-977407             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 17s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 17s)  kubelet          Node newest-cni-977407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 17s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-977407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-977407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-977407 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-977407 event: Registered Node newest-cni-977407 in Controller
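
The node report above shows Ready=False and the node.kubernetes.io/not-ready:NoSchedule taint because no CNI config file exists yet in /etc/cni/net.d/. The same state can be queried directly; a sketch, assuming the kubeconfig context minikube creates for this profile:

    kubectl --context newest-cni-977407 get node newest-cni-977407 -o wide
    # print the taints and the Ready condition status
    kubectl --context newest-cni-977407 get node newest-cni-977407 \
      -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'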
	
	
	==> dmesg <==
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	[Oct18 13:26] overlayfs: idmapped layers are currently not supported
	[Oct18 13:27] overlayfs: idmapped layers are currently not supported
	[ +43.080166] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bc54ce5478f9c40347c15aea7dd1cb004659dddf0bc903ca3e5e02f751a3ea96] <==
	{"level":"warn","ts":"2025-10-18T13:27:54.382871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.386718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.404151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.417976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.441058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.454099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.476626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.506817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.514428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.551835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.590086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.611396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.613138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.638754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.678715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.684487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.711846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.736369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.768408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.785275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.819129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.858031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.884204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.908487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:54.991131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56502","server-name":"","error":"EOF"}
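
The repeated "rejected connection on client endpoint ... EOF" warnings above record connections to etcd's TLS client port that closed before completing a handshake; they show up here during apiserver startup and are not failures by themselves. etcd health can be checked explicitly; a sketch, assuming the usual kubeadm certificate layout and that etcdctl is present on the node:

    # query etcd over its client port with the kubeadm-managed certs
    sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      endpoint health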
	
	
	==> kernel <==
	 13:28:07 up  5:10,  0 user,  load average: 3.35, 2.99, 2.54
	Linux newest-cni-977407 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8dcf51577eaf702ff73bc392e65680a148023b39a7fd78b9dafe8f67b6489dda] <==
	I1018 13:28:04.818569       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:28:04.907998       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:28:04.908154       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:28:04.908167       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:28:04.908183       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:28:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:28:05.114009       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:28:05.114039       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:28:05.114048       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:28:05.114387       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
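
kindnet above sets MTU 1500 and the 10.244.0.0/16 noMask subnet and then writes its CNI config into the cni-cfg host path; once a conflist lands in /etc/cni/net.d/, the NotReady condition from the node report clears. A quick check from the host, with the conflist file name kindnet typically writes (an assumption; adjust if it differs):

    minikube -p newest-cni-977407 ssh -- sudo ls -l /etc/cni/net.d/
    minikube -p newest-cni-977407 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist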
	
	
	==> kube-apiserver [19915eef584b0c08b34a06ed29bb0c535ccec6424a328a1f7711deb006488748] <==
	I1018 13:27:56.051117       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:27:56.051127       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:27:56.051134       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:27:56.068042       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:27:56.124199       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:27:56.124317       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 13:27:56.130801       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:27:56.131634       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:27:56.647291       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 13:27:56.652086       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 13:27:56.652183       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:27:57.392309       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:27:57.448504       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:27:57.553304       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 13:27:57.560832       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 13:27:57.562090       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:27:57.572602       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:27:57.879177       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:27:58.450768       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:27:58.481481       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 13:27:58.502932       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 13:28:03.732350       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:28:03.882513       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:28:03.887119       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:28:03.982894       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
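
The apiserver log above allocates 10.96.0.1 to default/kubernetes and 10.96.0.10 to kube-system/kube-dns out of the 10.96.0.0/12 service CIDR; the allocations show up as ordinary ClusterIP services, assuming the profile's kubeconfig context:

    kubectl --context newest-cni-977407 get svc kubernetes -n default
    kubectl --context newest-cni-977407 get svc kube-dns -n kube-system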
	
	
	==> kube-controller-manager [21719aae2fefe069da0d8a1de3e7106df7841b58e6a78de0abdef744606dada1] <==
	I1018 13:28:02.911106       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 13:28:02.911149       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 13:28:02.911168       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 13:28:02.911178       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 13:28:02.911184       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:28:02.911526       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 13:28:02.922495       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:28:02.924283       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 13:28:02.924771       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-977407" podCIDRs=["10.42.0.0/24"]
	I1018 13:28:02.924817       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 13:28:02.924908       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 13:28:02.925151       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 13:28:02.925266       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 13:28:02.925382       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 13:28:02.925428       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 13:28:02.925460       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 13:28:02.925492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 13:28:02.927428       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 13:28:02.927521       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 13:28:02.927601       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-977407"
	I1018 13:28:02.927643       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 13:28:02.928144       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:28:02.929642       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:28:02.930020       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 13:28:02.932483       1 shared_informer.go:356] "Caches are synced" controller="expand"
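
The node-ipam-controller above assigns PodCIDR 10.42.0.0/24 to the node, matching the PodCIDR shown in the node report; a one-line check against the API:

    kubectl --context newest-cni-977407 get node newest-cni-977407 -o jsonpath='{.spec.podCIDR}{"\n"}'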
	
	
	==> kube-proxy [fd1359868f0bd8699e5960f30cfa127871031191cd2f1f24c1adaefa1e979b83] <==
	I1018 13:28:05.359701       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:28:05.457460       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:28:05.557940       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:28:05.558058       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:28:05.558218       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:28:05.582005       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:28:05.582071       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:28:05.586475       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:28:05.586865       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:28:05.587123       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:28:05.589066       1 config.go:200] "Starting service config controller"
	I1018 13:28:05.589154       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:28:05.589212       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:28:05.589240       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:28:05.589275       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:28:05.589302       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:28:05.590022       1 config.go:309] "Starting node config controller"
	I1018 13:28:05.592541       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:28:05.592612       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:28:05.690409       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:28:05.690446       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:28:05.690472       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
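
kube-proxy above runs with the iptables proxier in dual-stack mode (IPv4 primary); the service NAT rules it programs can be inspected on the node. A sketch (run inside the node, e.g. via minikube -p newest-cni-977407 ssh):

    # first few entries of the chain kube-proxy maintains for Services
    sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20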
	
	
	==> kube-scheduler [c0fb7858e821a9e9519ad092aa53810284d7ccf89de1c209a4f9f631069f7d1c] <==
	E1018 13:27:55.953253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:27:55.953344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 13:27:55.953437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 13:27:55.953440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 13:27:55.953500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 13:27:55.953541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:27:55.953577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 13:27:55.953611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 13:27:55.953645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 13:27:55.953679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 13:27:55.953721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 13:27:55.953756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 13:27:55.953790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:27:56.009251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 13:27:56.822909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 13:27:56.954770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 13:27:56.967945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 13:27:56.979961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 13:27:56.990736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 13:27:57.042020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:27:57.054726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:27:57.081797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 13:27:57.100002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 13:27:57.221182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 13:28:00.097889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
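
The scheduler's "Failed to watch ... is forbidden" errors above are transient start-up races that stop once the apiserver finishes bootstrapping RBAC; the final "Caches are synced" line shows the client-ca informer did sync. Whether the permissions are in place now can be checked with an impersonated access review, assuming the profile's kubeconfig context:

    kubectl --context newest-cni-977407 auth can-i list pods --all-namespaces --as=system:kube-scheduler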
	
	
	==> kubelet <==
	Oct 18 13:27:58 newest-cni-977407 kubelet[1301]: I1018 13:27:58.840393    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88d33cddc4bcfd20d4cff3b474f8af0d-k8s-certs\") pod \"kube-controller-manager-newest-cni-977407\" (UID: \"88d33cddc4bcfd20d4cff3b474f8af0d\") " pod="kube-system/kube-controller-manager-newest-cni-977407"
	Oct 18 13:27:58 newest-cni-977407 kubelet[1301]: I1018 13:27:58.840424    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88d33cddc4bcfd20d4cff3b474f8af0d-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-977407\" (UID: \"88d33cddc4bcfd20d4cff3b474f8af0d\") " pod="kube-system/kube-controller-manager-newest-cni-977407"
	Oct 18 13:27:58 newest-cni-977407 kubelet[1301]: I1018 13:27:58.840445    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/6a3988cdfdd351628cd8b31d06428ff5-etcd-certs\") pod \"etcd-newest-cni-977407\" (UID: \"6a3988cdfdd351628cd8b31d06428ff5\") " pod="kube-system/etcd-newest-cni-977407"
	Oct 18 13:27:59 newest-cni-977407 kubelet[1301]: I1018 13:27:59.501349    1301 apiserver.go:52] "Watching apiserver"
	Oct 18 13:27:59 newest-cni-977407 kubelet[1301]: I1018 13:27:59.539551    1301 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 13:27:59 newest-cni-977407 kubelet[1301]: I1018 13:27:59.662456    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-977407" podStartSLOduration=1.662430096 podStartE2EDuration="1.662430096s" podCreationTimestamp="2025-10-18 13:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:27:59.662045895 +0000 UTC m=+1.299214408" watchObservedRunningTime="2025-10-18 13:27:59.662430096 +0000 UTC m=+1.299598593"
	Oct 18 13:27:59 newest-cni-977407 kubelet[1301]: I1018 13:27:59.662956    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-977407" podStartSLOduration=1.662944556 podStartE2EDuration="1.662944556s" podCreationTimestamp="2025-10-18 13:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:27:59.650240803 +0000 UTC m=+1.287409324" watchObservedRunningTime="2025-10-18 13:27:59.662944556 +0000 UTC m=+1.300113053"
	Oct 18 13:27:59 newest-cni-977407 kubelet[1301]: I1018 13:27:59.687708    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-977407" podStartSLOduration=1.6876876680000001 podStartE2EDuration="1.687687668s" podCreationTimestamp="2025-10-18 13:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:27:59.675871236 +0000 UTC m=+1.313039741" watchObservedRunningTime="2025-10-18 13:27:59.687687668 +0000 UTC m=+1.324856173"
	Oct 18 13:27:59 newest-cni-977407 kubelet[1301]: I1018 13:27:59.702345    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-977407" podStartSLOduration=1.7023265730000001 podStartE2EDuration="1.702326573s" podCreationTimestamp="2025-10-18 13:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:27:59.68888339 +0000 UTC m=+1.326051945" watchObservedRunningTime="2025-10-18 13:27:59.702326573 +0000 UTC m=+1.339495070"
	Oct 18 13:28:03 newest-cni-977407 kubelet[1301]: I1018 13:28:03.025139    1301 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 13:28:03 newest-cni-977407 kubelet[1301]: I1018 13:28:03.026318    1301 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: E1018 13:28:04.040760    1301 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-977407\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-977407' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.113184    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-cni-cfg\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.113244    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-lib-modules\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.113268    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd820b89-8782-4a68-8488-8eae7823ed4e-kube-proxy\") pod \"kube-proxy-x4kds\" (UID: \"fd820b89-8782-4a68-8488-8eae7823ed4e\") " pod="kube-system/kube-proxy-x4kds"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.113289    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-xtables-lock\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.113321    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l64xg\" (UniqueName: \"kubernetes.io/projected/62df2833-c27f-44a7-932f-ddd5e8e4888e-kube-api-access-l64xg\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.113347    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd820b89-8782-4a68-8488-8eae7823ed4e-xtables-lock\") pod \"kube-proxy-x4kds\" (UID: \"fd820b89-8782-4a68-8488-8eae7823ed4e\") " pod="kube-system/kube-proxy-x4kds"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.113363    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd820b89-8782-4a68-8488-8eae7823ed4e-lib-modules\") pod \"kube-proxy-x4kds\" (UID: \"fd820b89-8782-4a68-8488-8eae7823ed4e\") " pod="kube-system/kube-proxy-x4kds"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.113391    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5wvq\" (UniqueName: \"kubernetes.io/projected/fd820b89-8782-4a68-8488-8eae7823ed4e-kube-api-access-b5wvq\") pod \"kube-proxy-x4kds\" (UID: \"fd820b89-8782-4a68-8488-8eae7823ed4e\") " pod="kube-system/kube-proxy-x4kds"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: I1018 13:28:04.379558    1301 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 13:28:04 newest-cni-977407 kubelet[1301]: W1018 13:28:04.640002    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/crio-ebf127a4334cf560afb772f6bc898925368062bca843ee99232f22fc4edfe316 WatchSource:0}: Error finding container ebf127a4334cf560afb772f6bc898925368062bca843ee99232f22fc4edfe316: Status 404 returned error can't find the container with id ebf127a4334cf560afb772f6bc898925368062bca843ee99232f22fc4edfe316
	Oct 18 13:28:05 newest-cni-977407 kubelet[1301]: W1018 13:28:05.243644    1301 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/crio-a66c934714a0cf6b2094a5dd374abb6b3d81fc5ab68669e9564898d4a0605e2e WatchSource:0}: Error finding container a66c934714a0cf6b2094a5dd374abb6b3d81fc5ab68669e9564898d4a0605e2e: Status 404 returned error can't find the container with id a66c934714a0cf6b2094a5dd374abb6b3d81fc5ab68669e9564898d4a0605e2e
	Oct 18 13:28:05 newest-cni-977407 kubelet[1301]: I1018 13:28:05.688733    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g5rjn" podStartSLOduration=2.688714528 podStartE2EDuration="2.688714528s" podCreationTimestamp="2025-10-18 13:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:28:05.688233907 +0000 UTC m=+7.325402420" watchObservedRunningTime="2025-10-18 13:28:05.688714528 +0000 UTC m=+7.325883025"
	Oct 18 13:28:05 newest-cni-977407 kubelet[1301]: I1018 13:28:05.725801    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x4kds" podStartSLOduration=2.725780087 podStartE2EDuration="2.725780087s" podCreationTimestamp="2025-10-18 13:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 13:28:05.724088958 +0000 UTC m=+7.361257471" watchObservedRunningTime="2025-10-18 13:28:05.725780087 +0000 UTC m=+7.362948592"
	

                                                
                                                
-- /stdout --
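The one error in the kubelet log above is the 13:28:04 "Failed to watch ... configmaps \"kube-proxy\" is forbidden ... no relationship found between node 'newest-cni-977407' and this object" line; that is the node authorizer rejecting a watch that arrives before the kube-proxy pod is bound to the node, and it normally clears on its own once the pod is scheduled. A hedged way to confirm from the test context (these commands are not part of the harness; the pod name kube-proxy-x4kds is taken from the volume-mount lines above):

    kubectl --context newest-cni-977407 -n kube-system get configmap kube-proxy
    kubectl --context newest-cni-977407 -n kube-system get pod kube-proxy-x4kds -o wide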
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-977407 -n newest-cni-977407
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-977407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h2dzv storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner: exit status 1 (87.695385ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h2dzv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)
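The describe step above most likely fails with NotFound because it runs in the default namespace, while coredns-66bc5c9577-h2dzv and storage-provisioner are kube-system pods picked up by the field-selector query. A minimal sketch of the same lookup with the namespace made explicit (not part of the harness; kube-system is inferred from the pod names):

    kubectl --context newest-cni-977407 -n kube-system describe pod \
        coredns-66bc5c9577-h2dzv storage-provisioner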

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-208258 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-208258 --alsologtostderr -v=1: exit status 80 (2.151495961s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-208258 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:28:09.776928 1046181 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:28:09.777195 1046181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:09.777218 1046181 out.go:374] Setting ErrFile to fd 2...
	I1018 13:28:09.777226 1046181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:09.777546 1046181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:28:09.777903 1046181 out.go:368] Setting JSON to false
	I1018 13:28:09.777953 1046181 mustload.go:65] Loading cluster: default-k8s-diff-port-208258
	I1018 13:28:09.778410 1046181 config.go:182] Loaded profile config "default-k8s-diff-port-208258": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:28:09.778993 1046181 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-208258 --format={{.State.Status}}
	I1018 13:28:09.797363 1046181 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:28:09.797765 1046181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:09.868063 1046181 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:28:09.856494788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:09.868762 1046181 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-208258 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 13:28:09.872644 1046181 out.go:179] * Pausing node default-k8s-diff-port-208258 ... 
	I1018 13:28:09.876422 1046181 host.go:66] Checking if "default-k8s-diff-port-208258" exists ...
	I1018 13:28:09.876745 1046181 ssh_runner.go:195] Run: systemctl --version
	I1018 13:28:09.876805 1046181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-208258
	I1018 13:28:09.923788 1046181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34192 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/default-k8s-diff-port-208258/id_rsa Username:docker}
	I1018 13:28:10.035581 1046181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:28:10.049997 1046181 pause.go:52] kubelet running: true
	I1018 13:28:10.050080 1046181 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:28:10.387064 1046181 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:28:10.387170 1046181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:28:10.501409 1046181 cri.go:89] found id: "022b2dc043cff21491ef118ca1a12965b94c862353c77578248674069d30db9a"
	I1018 13:28:10.501437 1046181 cri.go:89] found id: "d2309751cb76c67327ef5c673bbdb0238a4d805bc56041835415378c954f574b"
	I1018 13:28:10.501443 1046181 cri.go:89] found id: "aba75db5d58b42c7044b9dd201911ebcffa1a8bb9f631356d353fe9e79e68cb1"
	I1018 13:28:10.501487 1046181 cri.go:89] found id: "cbd95b6e59aef9f61f4dc4386e03f5b8969a97c8349c3fdfd0d9113bd9976674"
	I1018 13:28:10.501492 1046181 cri.go:89] found id: "19fa55260cc0ebf3b9a0ca4ecde47e666790b43f694a77383361e90ed39f1d10"
	I1018 13:28:10.501497 1046181 cri.go:89] found id: "76e53086c2fd247abeb1f55181f23154153d2ef51cb8c4020a03e52db1f73a18"
	I1018 13:28:10.501507 1046181 cri.go:89] found id: "3099cd435aadec82c36c1ed527061ac593e3bd4a6cb6c7ecbf7ffab32ce556ed"
	I1018 13:28:10.501511 1046181 cri.go:89] found id: "97cff08426f9b4750d674978bbf2bd36512b2c9b3ddb5fca8832e24400916329"
	I1018 13:28:10.501514 1046181 cri.go:89] found id: "037c1dcd09818b19d840d76cf1bce5c7e62d75f7da12f0807c7abbdb70a0a744"
	I1018 13:28:10.501521 1046181 cri.go:89] found id: "b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d"
	I1018 13:28:10.501529 1046181 cri.go:89] found id: "b2aa237ea826dcc5b0dd657850a19f4ce1133fe69b7801f50f0c87075f91175d"
	I1018 13:28:10.501533 1046181 cri.go:89] found id: ""
	I1018 13:28:10.501612 1046181 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:28:10.545879 1046181 retry.go:31] will retry after 299.703108ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:10Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:28:10.846091 1046181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:28:10.861196 1046181 pause.go:52] kubelet running: false
	I1018 13:28:10.861256 1046181 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:28:11.089488 1046181 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:28:11.089582 1046181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:28:11.161570 1046181 cri.go:89] found id: "022b2dc043cff21491ef118ca1a12965b94c862353c77578248674069d30db9a"
	I1018 13:28:11.161594 1046181 cri.go:89] found id: "d2309751cb76c67327ef5c673bbdb0238a4d805bc56041835415378c954f574b"
	I1018 13:28:11.161599 1046181 cri.go:89] found id: "aba75db5d58b42c7044b9dd201911ebcffa1a8bb9f631356d353fe9e79e68cb1"
	I1018 13:28:11.161603 1046181 cri.go:89] found id: "cbd95b6e59aef9f61f4dc4386e03f5b8969a97c8349c3fdfd0d9113bd9976674"
	I1018 13:28:11.161607 1046181 cri.go:89] found id: "19fa55260cc0ebf3b9a0ca4ecde47e666790b43f694a77383361e90ed39f1d10"
	I1018 13:28:11.161610 1046181 cri.go:89] found id: "76e53086c2fd247abeb1f55181f23154153d2ef51cb8c4020a03e52db1f73a18"
	I1018 13:28:11.161613 1046181 cri.go:89] found id: "3099cd435aadec82c36c1ed527061ac593e3bd4a6cb6c7ecbf7ffab32ce556ed"
	I1018 13:28:11.161616 1046181 cri.go:89] found id: "97cff08426f9b4750d674978bbf2bd36512b2c9b3ddb5fca8832e24400916329"
	I1018 13:28:11.161619 1046181 cri.go:89] found id: "037c1dcd09818b19d840d76cf1bce5c7e62d75f7da12f0807c7abbdb70a0a744"
	I1018 13:28:11.161644 1046181 cri.go:89] found id: "b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d"
	I1018 13:28:11.161652 1046181 cri.go:89] found id: "b2aa237ea826dcc5b0dd657850a19f4ce1133fe69b7801f50f0c87075f91175d"
	I1018 13:28:11.161659 1046181 cri.go:89] found id: ""
	I1018 13:28:11.161710 1046181 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:28:11.174877 1046181 retry.go:31] will retry after 399.713426ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:11Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:28:11.575767 1046181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:28:11.589551 1046181 pause.go:52] kubelet running: false
	I1018 13:28:11.589615 1046181 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:28:11.747382 1046181 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:28:11.747462 1046181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:28:11.818927 1046181 cri.go:89] found id: "022b2dc043cff21491ef118ca1a12965b94c862353c77578248674069d30db9a"
	I1018 13:28:11.818952 1046181 cri.go:89] found id: "d2309751cb76c67327ef5c673bbdb0238a4d805bc56041835415378c954f574b"
	I1018 13:28:11.818959 1046181 cri.go:89] found id: "aba75db5d58b42c7044b9dd201911ebcffa1a8bb9f631356d353fe9e79e68cb1"
	I1018 13:28:11.818963 1046181 cri.go:89] found id: "cbd95b6e59aef9f61f4dc4386e03f5b8969a97c8349c3fdfd0d9113bd9976674"
	I1018 13:28:11.818967 1046181 cri.go:89] found id: "19fa55260cc0ebf3b9a0ca4ecde47e666790b43f694a77383361e90ed39f1d10"
	I1018 13:28:11.818972 1046181 cri.go:89] found id: "76e53086c2fd247abeb1f55181f23154153d2ef51cb8c4020a03e52db1f73a18"
	I1018 13:28:11.818976 1046181 cri.go:89] found id: "3099cd435aadec82c36c1ed527061ac593e3bd4a6cb6c7ecbf7ffab32ce556ed"
	I1018 13:28:11.818979 1046181 cri.go:89] found id: "97cff08426f9b4750d674978bbf2bd36512b2c9b3ddb5fca8832e24400916329"
	I1018 13:28:11.818983 1046181 cri.go:89] found id: "037c1dcd09818b19d840d76cf1bce5c7e62d75f7da12f0807c7abbdb70a0a744"
	I1018 13:28:11.818989 1046181 cri.go:89] found id: "b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d"
	I1018 13:28:11.818997 1046181 cri.go:89] found id: "b2aa237ea826dcc5b0dd657850a19f4ce1133fe69b7801f50f0c87075f91175d"
	I1018 13:28:11.819000 1046181 cri.go:89] found id: ""
	I1018 13:28:11.819053 1046181 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:28:11.834324 1046181 out.go:203] 
	W1018 13:28:11.837386 1046181 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 13:28:11.837409 1046181 out.go:285] * 
	* 
	W1018 13:28:11.844346 1046181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 13:28:11.847359 1046181 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-208258 --alsologtostderr -v=1 failed: exit status 80
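The pause fails because every `sudo runc list -f json` attempt returns `open /run/runc: no such file or directory`, even though crictl has just listed eleven running kube-system containers, so after its retries minikube exits with GUEST_PAUSE. A hedged set of diagnostics to run against the node (the profile name comes from the command above; these are suggestions, not steps the test performs):

    out/minikube-linux-arm64 -p default-k8s-diff-port-208258 ssh -- sudo crictl ps -a
    out/minikube-linux-arm64 -p default-k8s-diff-port-208258 ssh -- sudo ls -la /run/runc
    out/minikube-linux-arm64 -p default-k8s-diff-port-208258 ssh -- sudo runc list -f json

If /run/runc is genuinely absent while containers are running, it is worth checking whether the runtime root cri-o hands to runc matches the path minikube polls, before reaching for the issue template printed above.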
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-208258
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-208258:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae",
	        "Created": "2025-10-18T13:25:16.393417854Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1039531,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:27:00.934605353Z",
	            "FinishedAt": "2025-10-18T13:26:59.776500244Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/hostname",
	        "HostsPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/hosts",
	        "LogPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae-json.log",
	        "Name": "/default-k8s-diff-port-208258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-208258:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-208258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae",
	                "LowerDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-208258",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-208258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-208258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-208258",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-208258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51d50e38e944f6283d0219fdeeafc985b020fb9fb2fbc98d7cf958fc323f55ee",
	            "SandboxKey": "/var/run/docker/netns/51d50e38e944",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34192"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-208258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:40:46:85:7d:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "842f84fb2288b37127c8c8891c93fb974e3c77a976754988e22ee941caac1ff0",
	                    "EndpointID": "e963da77c255e5ae5bd55a1c078d2ebd3531e367a0f038cd92dc83485e2d807c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-208258",
	                        "43668e797f9a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
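Of the inspect output above, the post-mortem really uses only two facts: the container is still running (State.Status / State.Paused) and SSH is published on 127.0.0.1:34192. A short sketch for pulling just those fields with Go templates (the 22/tcp template is the same one the pause command ran at 13:28:09):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-208258
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-208258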
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258: exit status 2 (452.965725ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-208258 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-208258 logs -n 25: (1.374623604s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                                                                                                    │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                                                                                               │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ stop    │ -p embed-certs-774829 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:26 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-208258 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-208258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ image   │ embed-certs-774829 image list --format=json                                                                                                                                                                                                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ pause   │ -p embed-certs-774829 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-977407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ stop    │ -p newest-cni-977407 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-977407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ pause   │ -p default-k8s-diff-port-208258 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:28:09
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:28:09.830721 1046185 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:28:09.831555 1046185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:09.831620 1046185 out.go:374] Setting ErrFile to fd 2...
	I1018 13:28:09.831701 1046185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:09.832914 1046185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:28:09.834822 1046185 out.go:368] Setting JSON to false
	I1018 13:28:09.836213 1046185 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18642,"bootTime":1760775448,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:28:09.836316 1046185 start.go:141] virtualization:  
	I1018 13:28:09.841684 1046185 out.go:179] * [newest-cni-977407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:28:09.845853 1046185 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:28:09.846038 1046185 notify.go:220] Checking for updates...
	I1018 13:28:09.852369 1046185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:28:09.855410 1046185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:28:09.858397 1046185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:28:09.861563 1046185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:28:09.864906 1046185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:28:09.869301 1046185 config.go:182] Loaded profile config "newest-cni-977407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:28:09.870004 1046185 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:28:09.902026 1046185 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:28:09.902165 1046185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:09.988547 1046185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:28:09.97439116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:09.988656 1046185 docker.go:318] overlay module found
	I1018 13:28:09.991916 1046185 out.go:179] * Using the docker driver based on existing profile
	I1018 13:28:09.994783 1046185 start.go:305] selected driver: docker
	I1018 13:28:09.994808 1046185 start.go:925] validating driver "docker" against &{Name:newest-cni-977407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:28:09.994918 1046185 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:28:09.995645 1046185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:10.088759 1046185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:28:10.078288594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:10.089463 1046185 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 13:28:10.089499 1046185 cni.go:84] Creating CNI manager for ""
	I1018 13:28:10.089562 1046185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:28:10.089608 1046185 start.go:349] cluster config:
	{Name:newest-cni-977407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:28:10.095797 1046185 out.go:179] * Starting "newest-cni-977407" primary control-plane node in "newest-cni-977407" cluster
	I1018 13:28:10.098706 1046185 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:28:10.101744 1046185 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:28:10.104661 1046185 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:28:10.104730 1046185 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:28:10.104742 1046185 cache.go:58] Caching tarball of preloaded images
	I1018 13:28:10.104854 1046185 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:28:10.104868 1046185 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:28:10.104986 1046185 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/config.json ...
	I1018 13:28:10.105219 1046185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:28:10.132506 1046185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:28:10.132528 1046185 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:28:10.132541 1046185 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:28:10.132570 1046185 start.go:360] acquireMachinesLock for newest-cni-977407: {Name:mk0de410d37c351444ae892375ed0eca81429481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:28:10.132625 1046185 start.go:364] duration metric: took 37.047µs to acquireMachinesLock for "newest-cni-977407"
	I1018 13:28:10.132645 1046185 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:28:10.132651 1046185 fix.go:54] fixHost starting: 
	I1018 13:28:10.132908 1046185 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:28:10.153394 1046185 fix.go:112] recreateIfNeeded on newest-cni-977407: state=Stopped err=<nil>
	W1018 13:28:10.153437 1046185 fix.go:138] unexpected machine state, will restart: <nil>
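
	The fixHost step above decides whether to restart by asking Docker for the container's state ("docker container inspect newest-cni-977407 --format={{.State.Status}}") and finding it Stopped. A minimal, illustrative Go sketch of that same check, using os/exec directly rather than minikube's internal cli_runner (the profile name is just the one from this log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState asks the local Docker daemon for a container's state
// (e.g. "running", "exited"), mirroring the inspect call in the log above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("newest-cni-977407")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// A stopped machine is what sends minikube down the restart path above.
	fmt.Println("container state:", state)
}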
	
	
	==> CRI-O <==
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.874502014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.89916342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.899949324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.926174136Z" level=info msg="Created container b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs/dashboard-metrics-scraper" id=ed596a37-5ce2-4ad5-9990-bc7583f35571 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.927251318Z" level=info msg="Starting container: b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d" id=d08c3b53-c938-48f4-b9d1-a26b41801dba name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.930170564Z" level=info msg="Started container" PID=1637 containerID=b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs/dashboard-metrics-scraper id=d08c3b53-c938-48f4-b9d1-a26b41801dba name=/runtime.v1.RuntimeService/StartContainer sandboxID=26a272b6a765315cdfa456caa7f47047f32edc05070a8648d5701cad9501ce99
	Oct 18 13:27:52 default-k8s-diff-port-208258 conmon[1635]: conmon b8921c1d6ce7dd06dc9b <ninfo>: container 1637 exited with status 1
	Oct 18 13:27:53 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:53.116021714Z" level=info msg="Removing container: 84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d" id=cdb2dbdb-ab56-4671-8e13-e1d810dba3f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:27:53 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:53.129635402Z" level=info msg="Error loading conmon cgroup of container 84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d: cgroup deleted" id=cdb2dbdb-ab56-4671-8e13-e1d810dba3f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:27:53 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:53.139569408Z" level=info msg="Removed container 84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs/dashboard-metrics-scraper" id=cdb2dbdb-ab56-4671-8e13-e1d810dba3f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.836630809Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.845604222Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.84565045Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.84567011Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.850015391Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.850056048Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.850074961Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.853784197Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.85381715Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.853839025Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.862927712Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.862959942Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.862979881Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.871126527Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.871165748Z" level=info msg="Updated default CNI network name to kindnet"
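
	The CREATE/WRITE/RENAME events above are CRI-O's CNI monitor reacting to kindnet rewriting /etc/cni/net.d/10-kindnet.conflist and then re-reading the default network. A rough sketch of the same watch-and-reload pattern using fsnotify (illustrative only, not CRI-O's actual implementation):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory, as the "CNI monitoring event" lines suggest.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev := <-w.Events:
			// A CREATE/WRITE/RENAME on a .conflist is where a config reload would happen.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}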
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	b8921c1d6ce7d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   26a272b6a7653       dashboard-metrics-scraper-6ffb444bf9-qwnjs             kubernetes-dashboard
	022b2dc043cff       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   af648c75747f9       storage-provisioner                                    kube-system
	b2aa237ea826d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   6f36ba93186ea       kubernetes-dashboard-855c9754f9-5t7tq                  kubernetes-dashboard
	d2309751cb76c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   d864d1dd6785e       coredns-66bc5c9577-2g4gz                               kube-system
	b6c912d920752       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   1d93d19e28348       busybox                                                default
	aba75db5d58b4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   0e6ba50cb04ee       kindnet-4l67c                                          kube-system
	cbd95b6e59aef       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   aebf2881fd686       kube-proxy-q5bvt                                       kube-system
	19fa55260cc0e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   af648c75747f9       storage-provisioner                                    kube-system
	76e53086c2fd2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e94b8eccb6942       kube-scheduler-default-k8s-diff-port-208258            kube-system
	3099cd435aade       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   479a5fbdf367e       kube-controller-manager-default-k8s-diff-port-208258   kube-system
	97cff08426f9b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a39eb72fa8b10       etcd-default-k8s-diff-port-208258                      kube-system
	037c1dcd09818       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   a2bbf995b4fc2       kube-apiserver-default-k8s-diff-port-208258            kube-system
	
	
	==> coredns [d2309751cb76c67327ef5c673bbdb0238a4d805bc56041835415378c954f574b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52884 - 3678 "HINFO IN 8246425369791439278.8834056499321816105. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024177095s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-208258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-208258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=default-k8s-diff-port-208258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_25_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:25:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-208258
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:28:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:27:44 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:27:44 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:27:44 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:27:44 +0000   Sat, 18 Oct 2025 13:26:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-208258
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                248dcf9c-de96-4df7-a92b-ba98e54e1b6e
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-2g4gz                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-208258                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-4l67c                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-208258             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-208258    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-q5bvt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-208258             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qwnjs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5t7tq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m28s                  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s                  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m28s                  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m24s                  node-controller  Node default-k8s-diff-port-208258 event: Registered Node default-k8s-diff-port-208258 in Controller
	  Normal   NodeReady                102s                   kubelet          Node default-k8s-diff-port-208258 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node default-k8s-diff-port-208258 event: Registered Node default-k8s-diff-port-208258 in Controller
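
	The conditions, capacity, and events above come from "kubectl describe node"; the same Ready/MemoryPressure status can be read programmatically. A minimal client-go sketch, where the kubeconfig path is a placeholder and the node name is taken from this log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-208258", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Prints the same condition table summarized in the describe output above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}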
	
	
	==> dmesg <==
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	[Oct18 13:26] overlayfs: idmapped layers are currently not supported
	[Oct18 13:27] overlayfs: idmapped layers are currently not supported
	[ +43.080166] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [97cff08426f9b4750d674978bbf2bd36512b2c9b3ddb5fca8832e24400916329] <==
	{"level":"warn","ts":"2025-10-18T13:27:11.602013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.627216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.655744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.669354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.686502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.705896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.725172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.740127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.763257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.784101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.807248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.832058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.849790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.868932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.886810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.904998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.932016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.943916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.965888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.983417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.010541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.036900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.059795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.070117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.139244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32824","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:28:13 up  5:10,  0 user,  load average: 3.58, 3.05, 2.57
	Linux default-k8s-diff-port-208258 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aba75db5d58b42c7044b9dd201911ebcffa1a8bb9f631356d353fe9e79e68cb1] <==
	I1018 13:27:14.643415       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:27:14.643686       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:27:14.643816       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:27:14.643827       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:27:14.643838       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:27:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:27:14.834700       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:27:14.837069       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:27:14.837161       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:27:14.842205       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:27:44.834789       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:27:44.838378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:27:44.838554       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:27:44.838671       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:27:46.337342       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:27:46.337445       1 metrics.go:72] Registering metrics
	I1018 13:27:46.337548       1 controller.go:711] "Syncing nftables rules"
	I1018 13:27:54.834946       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:27:54.835074       1 main.go:301] handling current node
	I1018 13:28:04.839382       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:28:04.839417       1 main.go:301] handling current node
	
	
	==> kube-apiserver [037c1dcd09818b19d840d76cf1bce5c7e62d75f7da12f0807c7abbdb70a0a744] <==
	I1018 13:27:13.111062       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:27:13.121497       1 aggregator.go:171] initial CRD sync complete...
	I1018 13:27:13.121519       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:27:13.121525       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:27:13.121531       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:27:13.136304       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:27:13.147097       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1018 13:27:13.153775       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:27:13.155812       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 13:27:13.155837       1 policy_source.go:240] refreshing policies
	I1018 13:27:13.161042       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:27:13.189992       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 13:27:13.190047       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 13:27:13.190058       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 13:27:13.736307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:27:13.915912       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:27:14.113569       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:27:14.543825       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:27:14.674253       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:27:14.733595       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:27:15.220625       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.240.103"}
	I1018 13:27:15.247939       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.169.198"}
	I1018 13:27:17.544835       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:27:17.892880       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:27:17.968017       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3099cd435aadec82c36c1ed527061ac593e3bd4a6cb6c7ecbf7ffab32ce556ed] <==
	I1018 13:27:17.500404       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:27:17.501894       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 13:27:17.502912       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:27:17.505357       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:27:17.508506       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:27:17.512052       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 13:27:17.528368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:27:17.528462       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:27:17.530649       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:27:17.530872       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:27:17.531067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 13:27:17.531423       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 13:27:17.534791       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 13:27:17.535116       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 13:27:17.531134       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 13:27:17.535269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 13:27:17.531148       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 13:27:17.535415       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 13:27:17.535520       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-208258"
	I1018 13:27:17.535603       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 13:27:17.531160       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 13:27:17.531168       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 13:27:17.537743       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 13:27:17.540013       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:27:17.555184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cbd95b6e59aef9f61f4dc4386e03f5b8969a97c8349c3fdfd0d9113bd9976674] <==
	I1018 13:27:15.254485       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:27:15.361937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:27:15.464068       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:27:15.464109       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 13:27:15.464179       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:27:15.484724       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:27:15.484778       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:27:15.488563       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:27:15.488881       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:27:15.488905       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:27:15.490251       1 config.go:200] "Starting service config controller"
	I1018 13:27:15.490274       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:27:15.490292       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:27:15.490296       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:27:15.490306       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:27:15.490310       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:27:15.490953       1 config.go:309] "Starting node config controller"
	I1018 13:27:15.490973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:27:15.490979       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:27:15.590341       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:27:15.590362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:27:15.590396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [76e53086c2fd247abeb1f55181f23154153d2ef51cb8c4020a03e52db1f73a18] <==
	I1018 13:27:13.081131       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:27:13.084126       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:27:13.099720       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:27:13.099771       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:27:13.100799       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:27:13.101545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 13:27:13.140847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 13:27:13.140931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:27:13.140982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:27:13.141028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 13:27:13.141083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 13:27:13.141169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:27:13.141248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 13:27:13.141295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 13:27:13.141349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 13:27:13.141408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 13:27:13.141442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 13:27:13.141479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 13:27:13.141521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 13:27:13.141684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 13:27:13.141725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 13:27:13.141766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 13:27:13.141804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 13:27:13.141842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1018 13:27:13.200694       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:18.200987     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr5k5\" (UniqueName: \"kubernetes.io/projected/cd845222-6a66-4024-b059-0be5c4fed286-kube-api-access-dr5k5\") pod \"kubernetes-dashboard-855c9754f9-5t7tq\" (UID: \"cd845222-6a66-4024-b059-0be5c4fed286\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5t7tq"
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:18.201553     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cd845222-6a66-4024-b059-0be5c4fed286-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-5t7tq\" (UID: \"cd845222-6a66-4024-b059-0be5c4fed286\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5t7tq"
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:18.302834     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7dj\" (UniqueName: \"kubernetes.io/projected/55074aa2-d004-475e-9a8c-5e801e899359-kube-api-access-8z7dj\") pod \"dashboard-metrics-scraper-6ffb444bf9-qwnjs\" (UID: \"55074aa2-d004-475e-9a8c-5e801e899359\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs"
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:18.303018     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55074aa2-d004-475e-9a8c-5e801e899359-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qwnjs\" (UID: \"55074aa2-d004-475e-9a8c-5e801e899359\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs"
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: W1018 13:27:18.482190     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/crio-6f36ba93186eae28f3dd64053456e1686d84f11d59fb0e4b43c9f63817546fc9 WatchSource:0}: Error finding container 6f36ba93186eae28f3dd64053456e1686d84f11d59fb0e4b43c9f63817546fc9: Status 404 returned error can't find the container with id 6f36ba93186eae28f3dd64053456e1686d84f11d59fb0e4b43c9f63817546fc9
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: W1018 13:27:18.524331     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/crio-26a272b6a765315cdfa456caa7f47047f32edc05070a8648d5701cad9501ce99 WatchSource:0}: Error finding container 26a272b6a765315cdfa456caa7f47047f32edc05070a8648d5701cad9501ce99: Status 404 returned error can't find the container with id 26a272b6a765315cdfa456caa7f47047f32edc05070a8648d5701cad9501ce99
	Oct 18 13:27:32 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:32.033776     777 scope.go:117] "RemoveContainer" containerID="84e0d0badb68fdf03cfa12a3b5dbeb5f5850037df9064ffa7da0efbcff37901d"
	Oct 18 13:27:32 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:32.078429     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5t7tq" podStartSLOduration=7.469116315 podStartE2EDuration="14.078410042s" podCreationTimestamp="2025-10-18 13:27:18 +0000 UTC" firstStartedPulling="2025-10-18 13:27:18.487677924 +0000 UTC m=+10.814394473" lastFinishedPulling="2025-10-18 13:27:25.096971659 +0000 UTC m=+17.423688200" observedRunningTime="2025-10-18 13:27:26.046074086 +0000 UTC m=+18.372790636" watchObservedRunningTime="2025-10-18 13:27:32.078410042 +0000 UTC m=+24.405126575"
	Oct 18 13:27:33 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:33.037875     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:33 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:33.038774     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:27:33 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:33.039601     777 scope.go:117] "RemoveContainer" containerID="84e0d0badb68fdf03cfa12a3b5dbeb5f5850037df9064ffa7da0efbcff37901d"
	Oct 18 13:27:34 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:34.044141     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:34 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:34.044299     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:27:38 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:38.454325     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:38 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:38.454955     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:27:45 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:45.074117     777 scope.go:117] "RemoveContainer" containerID="19fa55260cc0ebf3b9a0ca4ecde47e666790b43f694a77383361e90ed39f1d10"
	Oct 18 13:27:52 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:52.871267     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:53 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:53.100003     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:53 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:53.103858     777 scope.go:117] "RemoveContainer" containerID="b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d"
	Oct 18 13:27:53 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:53.104259     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:27:58 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:58.454714     777 scope.go:117] "RemoveContainer" containerID="b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d"
	Oct 18 13:27:58 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:58.455756     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:28:10 default-k8s-diff-port-208258 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:28:10 default-k8s-diff-port-208258 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:28:10 default-k8s-diff-port-208258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b2aa237ea826dcc5b0dd657850a19f4ce1133fe69b7801f50f0c87075f91175d] <==
	2025/10/18 13:27:25 Starting overwatch
	2025/10/18 13:27:25 Using namespace: kubernetes-dashboard
	2025/10/18 13:27:25 Using in-cluster config to connect to apiserver
	2025/10/18 13:27:25 Using secret token for csrf signing
	2025/10/18 13:27:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 13:27:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 13:27:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 13:27:25 Generating JWE encryption key
	2025/10/18 13:27:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 13:27:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 13:27:26 Initializing JWE encryption key from synchronized object
	2025/10/18 13:27:26 Creating in-cluster Sidecar client
	2025/10/18 13:27:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:27:26 Serving insecurely on HTTP port: 9090
	2025/10/18 13:27:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [022b2dc043cff21491ef118ca1a12965b94c862353c77578248674069d30db9a] <==
	I1018 13:27:45.192715       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 13:27:45.192885       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 13:27:45.207040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:48.662596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:52.923556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:56.521965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:59.575325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:02.598228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:02.605352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:28:02.605793       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:28:02.606047       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-208258_133592c0-b07e-488e-a9e5-923b723fb6b7!
	I1018 13:28:02.610511       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"884d1ecb-78ac-42d2-b717-b442ddc99282", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-208258_133592c0-b07e-488e-a9e5-923b723fb6b7 became leader
	W1018 13:28:02.612077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:02.630614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:28:02.710740       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-208258_133592c0-b07e-488e-a9e5-923b723fb6b7!
	W1018 13:28:04.634317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:04.643215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:06.647192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:06.654209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:08.657471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:08.661839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:10.666959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:10.673836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:12.679218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:12.686382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [19fa55260cc0ebf3b9a0ca4ecde47e666790b43f694a77383361e90ed39f1d10] <==
	I1018 13:27:14.600756       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 13:27:44.611228       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258: exit status 2 (458.796682ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
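Note (sketch, not part of the captured output): `status --format` takes a Go template over minikube's status struct, so the same check can report several components in one call; assuming the standard Host/Kubelet/APIServer fields:

	out/minikube-linux-arm64 status --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}' -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258

A non-zero exit from this command is expected when the cluster has just been paused, which is why the harness records "status error: exit status 2 (may be ok)" and continues with the post-mortem.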
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-208258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
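Note (sketch, not from the harness): the field-selector query above filters on status.phase, but a pod whose container is in CrashLoopBackOff (dashboard-metrics-scraper-6ffb444bf9-qwnjs in the kubelet log) typically keeps phase Running, so it will not show up there; listing restart counts surfaces it instead:

	kubectl --context default-k8s-diff-port-208258 get po -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,RESTARTS:.status.containerStatuses[*].restartCount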
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-208258
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-208258:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae",
	        "Created": "2025-10-18T13:25:16.393417854Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1039531,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:27:00.934605353Z",
	            "FinishedAt": "2025-10-18T13:26:59.776500244Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/hostname",
	        "HostsPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/hosts",
	        "LogPath": "/var/lib/docker/containers/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae-json.log",
	        "Name": "/default-k8s-diff-port-208258",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-208258:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-208258",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae",
	                "LowerDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9e2f4661df3625e0eff0add069386c140b7f096f6a441d8d0f785dc5e2e9a05/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-208258",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-208258/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-208258",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-208258",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-208258",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51d50e38e944f6283d0219fdeeafc985b020fb9fb2fbc98d7cf958fc323f55ee",
	            "SandboxKey": "/var/run/docker/netns/51d50e38e944",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34192"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-208258": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:40:46:85:7d:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "842f84fb2288b37127c8c8891c93fb974e3c77a976754988e22ee941caac1ff0",
	                    "EndpointID": "e963da77c255e5ae5bd55a1c078d2ebd3531e367a0f038cd92dc83485e2d807c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-208258",
	                        "43668e797f9a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
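Note (sketch, not part of the captured output): the same inspect data can be read field-by-field with Go templates instead of dumping the full JSON, for example the container state and the host port mapped to the in-container API-server port 8444/tcp (34195 in the dump above):

	docker inspect -f '{{.State.Status}}' default-k8s-diff-port-208258
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-208258

The same template pattern appears for port 22/tcp in the "Last Start" log later in this report.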
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258: exit status 2 (469.839458ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-208258 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-208258 logs -n 25: (1.645672556s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ image   │ no-preload-779884 image list --format=json                                                                                                                                                                                                    │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:24 UTC │ 18 Oct 25 13:25 UTC │
	│ pause   │ -p no-preload-779884 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p no-preload-779884                                                                                                                                                                                                                          │ no-preload-779884            │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                                                                                               │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ stop    │ -p embed-certs-774829 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:26 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-208258 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-208258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ image   │ embed-certs-774829 image list --format=json                                                                                                                                                                                                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ pause   │ -p embed-certs-774829 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-977407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ stop    │ -p newest-cni-977407 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-977407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ pause   │ -p default-k8s-diff-port-208258 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:28:09
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:28:09.830721 1046185 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:28:09.831555 1046185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:09.831620 1046185 out.go:374] Setting ErrFile to fd 2...
	I1018 13:28:09.831701 1046185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:09.832914 1046185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:28:09.834822 1046185 out.go:368] Setting JSON to false
	I1018 13:28:09.836213 1046185 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18642,"bootTime":1760775448,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:28:09.836316 1046185 start.go:141] virtualization:  
	I1018 13:28:09.841684 1046185 out.go:179] * [newest-cni-977407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:28:09.845853 1046185 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:28:09.846038 1046185 notify.go:220] Checking for updates...
	I1018 13:28:09.852369 1046185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:28:09.855410 1046185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:28:09.858397 1046185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:28:09.861563 1046185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:28:09.864906 1046185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:28:09.869301 1046185 config.go:182] Loaded profile config "newest-cni-977407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:28:09.870004 1046185 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:28:09.902026 1046185 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:28:09.902165 1046185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:09.988547 1046185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:28:09.97439116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:09.988656 1046185 docker.go:318] overlay module found
	I1018 13:28:09.991916 1046185 out.go:179] * Using the docker driver based on existing profile
	I1018 13:28:09.994783 1046185 start.go:305] selected driver: docker
	I1018 13:28:09.994808 1046185 start.go:925] validating driver "docker" against &{Name:newest-cni-977407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:28:09.994918 1046185 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:28:09.995645 1046185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:10.088759 1046185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:28:10.078288594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:10.089463 1046185 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 13:28:10.089499 1046185 cni.go:84] Creating CNI manager for ""
	I1018 13:28:10.089562 1046185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:28:10.089608 1046185 start.go:349] cluster config:
	{Name:newest-cni-977407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-977407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:28:10.095797 1046185 out.go:179] * Starting "newest-cni-977407" primary control-plane node in "newest-cni-977407" cluster
	I1018 13:28:10.098706 1046185 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:28:10.101744 1046185 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:28:10.104661 1046185 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:28:10.104730 1046185 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:28:10.104742 1046185 cache.go:58] Caching tarball of preloaded images
	I1018 13:28:10.104854 1046185 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:28:10.104868 1046185 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:28:10.104986 1046185 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/config.json ...
	I1018 13:28:10.105219 1046185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:28:10.132506 1046185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:28:10.132528 1046185 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:28:10.132541 1046185 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:28:10.132570 1046185 start.go:360] acquireMachinesLock for newest-cni-977407: {Name:mk0de410d37c351444ae892375ed0eca81429481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:28:10.132625 1046185 start.go:364] duration metric: took 37.047µs to acquireMachinesLock for "newest-cni-977407"
	I1018 13:28:10.132645 1046185 start.go:96] Skipping create...Using existing machine configuration
	I1018 13:28:10.132651 1046185 fix.go:54] fixHost starting: 
	I1018 13:28:10.132908 1046185 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:28:10.153394 1046185 fix.go:112] recreateIfNeeded on newest-cni-977407: state=Stopped err=<nil>
	W1018 13:28:10.153437 1046185 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 13:28:10.156696 1046185 out.go:252] * Restarting existing docker container for "newest-cni-977407" ...
	I1018 13:28:10.156794 1046185 cli_runner.go:164] Run: docker start newest-cni-977407
	I1018 13:28:10.497515 1046185 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:28:10.524465 1046185 kic.go:430] container "newest-cni-977407" state is running.
	I1018 13:28:10.525073 1046185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-977407
	I1018 13:28:10.557766 1046185 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/newest-cni-977407/config.json ...
	I1018 13:28:10.557994 1046185 machine.go:93] provisionDockerMachine start ...
	I1018 13:28:10.558051 1046185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:28:10.582688 1046185 main.go:141] libmachine: Using SSH client type: native
	I1018 13:28:10.583184 1046185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1018 13:28:10.583200 1046185 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:28:10.584102 1046185 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 13:28:13.759734 1046185 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-977407
	
	I1018 13:28:13.759762 1046185 ubuntu.go:182] provisioning hostname "newest-cni-977407"
	I1018 13:28:13.759837 1046185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:28:13.795713 1046185 main.go:141] libmachine: Using SSH client type: native
	I1018 13:28:13.796042 1046185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1018 13:28:13.796114 1046185 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-977407 && echo "newest-cni-977407" | sudo tee /etc/hostname
	I1018 13:28:13.990501 1046185 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-977407
	
	I1018 13:28:13.990598 1046185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:28:14.022448 1046185 main.go:141] libmachine: Using SSH client type: native
	I1018 13:28:14.022767 1046185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I1018 13:28:14.022792 1046185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-977407' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-977407/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-977407' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:28:14.198034 1046185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:28:14.198056 1046185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:28:14.198083 1046185 ubuntu.go:190] setting up certificates
	I1018 13:28:14.198093 1046185 provision.go:84] configureAuth start
	I1018 13:28:14.198166 1046185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-977407
	I1018 13:28:14.229499 1046185 provision.go:143] copyHostCerts
	I1018 13:28:14.229564 1046185 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:28:14.229581 1046185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:28:14.229666 1046185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:28:14.229757 1046185 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:28:14.229762 1046185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:28:14.229789 1046185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:28:14.229838 1046185 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:28:14.229843 1046185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:28:14.229864 1046185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:28:14.229907 1046185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.newest-cni-977407 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-977407]
	I1018 13:28:14.772689 1046185 provision.go:177] copyRemoteCerts
	I1018 13:28:14.772770 1046185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:28:14.772859 1046185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:28:14.793384 1046185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.874502014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.89916342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.899949324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.926174136Z" level=info msg="Created container b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs/dashboard-metrics-scraper" id=ed596a37-5ce2-4ad5-9990-bc7583f35571 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.927251318Z" level=info msg="Starting container: b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d" id=d08c3b53-c938-48f4-b9d1-a26b41801dba name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:27:52 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:52.930170564Z" level=info msg="Started container" PID=1637 containerID=b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs/dashboard-metrics-scraper id=d08c3b53-c938-48f4-b9d1-a26b41801dba name=/runtime.v1.RuntimeService/StartContainer sandboxID=26a272b6a765315cdfa456caa7f47047f32edc05070a8648d5701cad9501ce99
	Oct 18 13:27:52 default-k8s-diff-port-208258 conmon[1635]: conmon b8921c1d6ce7dd06dc9b <ninfo>: container 1637 exited with status 1
	Oct 18 13:27:53 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:53.116021714Z" level=info msg="Removing container: 84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d" id=cdb2dbdb-ab56-4671-8e13-e1d810dba3f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:27:53 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:53.129635402Z" level=info msg="Error loading conmon cgroup of container 84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d: cgroup deleted" id=cdb2dbdb-ab56-4671-8e13-e1d810dba3f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:27:53 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:53.139569408Z" level=info msg="Removed container 84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs/dashboard-metrics-scraper" id=cdb2dbdb-ab56-4671-8e13-e1d810dba3f5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.836630809Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.845604222Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.84565045Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.84567011Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.850015391Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.850056048Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.850074961Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.853784197Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.85381715Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.853839025Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.862927712Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.862959942Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.862979881Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.871126527Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 13:27:54 default-k8s-diff-port-208258 crio[649]: time="2025-10-18T13:27:54.871165748Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	b8921c1d6ce7d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   26a272b6a7653       dashboard-metrics-scraper-6ffb444bf9-qwnjs             kubernetes-dashboard
	022b2dc043cff       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   af648c75747f9       storage-provisioner                                    kube-system
	b2aa237ea826d       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   50 seconds ago       Running             kubernetes-dashboard        0                   6f36ba93186ea       kubernetes-dashboard-855c9754f9-5t7tq                  kubernetes-dashboard
	d2309751cb76c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   d864d1dd6785e       coredns-66bc5c9577-2g4gz                               kube-system
	b6c912d920752       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   1d93d19e28348       busybox                                                default
	aba75db5d58b4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   0e6ba50cb04ee       kindnet-4l67c                                          kube-system
	cbd95b6e59aef       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   aebf2881fd686       kube-proxy-q5bvt                                       kube-system
	19fa55260cc0e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   af648c75747f9       storage-provisioner                                    kube-system
	76e53086c2fd2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e94b8eccb6942       kube-scheduler-default-k8s-diff-port-208258            kube-system
	3099cd435aade       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   479a5fbdf367e       kube-controller-manager-default-k8s-diff-port-208258   kube-system
	97cff08426f9b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a39eb72fa8b10       etcd-default-k8s-diff-port-208258                      kube-system
	037c1dcd09818       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   a2bbf995b4fc2       kube-apiserver-default-k8s-diff-port-208258            kube-system
	
	
	==> coredns [d2309751cb76c67327ef5c673bbdb0238a4d805bc56041835415378c954f574b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52884 - 3678 "HINFO IN 8246425369791439278.8834056499321816105. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024177095s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-208258
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-208258
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=default-k8s-diff-port-208258
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_25_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:25:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-208258
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:28:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:27:44 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:27:44 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:27:44 +0000   Sat, 18 Oct 2025 13:25:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 13:27:44 +0000   Sat, 18 Oct 2025 13:26:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-208258
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                248dcf9c-de96-4df7-a92b-ba98e54e1b6e
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-2g4gz                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-208258                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-4l67c                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-208258             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-208258    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-q5bvt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-208258             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qwnjs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5t7tq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   Starting                 60s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-208258 event: Registered Node default-k8s-diff-port-208258 in Controller
	  Normal   NodeReady                104s                   kubelet          Node default-k8s-diff-port-208258 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-208258 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node default-k8s-diff-port-208258 event: Registered Node default-k8s-diff-port-208258 in Controller
	
	
	==> dmesg <==
	[ +24.398912] overlayfs: idmapped layers are currently not supported
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	[Oct18 13:26] overlayfs: idmapped layers are currently not supported
	[Oct18 13:27] overlayfs: idmapped layers are currently not supported
	[ +43.080166] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [97cff08426f9b4750d674978bbf2bd36512b2c9b3ddb5fca8832e24400916329] <==
	{"level":"warn","ts":"2025-10-18T13:27:11.602013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.627216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.655744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.669354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.686502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.705896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.725172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.740127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.763257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.784101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.807248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.832058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.849790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.868932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.886810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.904998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.932016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.943916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.965888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:11.983417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.010541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.036900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.059795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.070117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:27:12.139244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32824","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:28:15 up  5:10,  0 user,  load average: 3.58, 3.05, 2.57
	Linux default-k8s-diff-port-208258 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aba75db5d58b42c7044b9dd201911ebcffa1a8bb9f631356d353fe9e79e68cb1] <==
	I1018 13:27:14.643415       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:27:14.643686       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 13:27:14.643816       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:27:14.643827       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:27:14.643838       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:27:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:27:14.834700       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:27:14.837069       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:27:14.837161       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:27:14.842205       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 13:27:44.834789       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 13:27:44.838378       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 13:27:44.838554       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 13:27:44.838671       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 13:27:46.337342       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 13:27:46.337445       1 metrics.go:72] Registering metrics
	I1018 13:27:46.337548       1 controller.go:711] "Syncing nftables rules"
	I1018 13:27:54.834946       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:27:54.835074       1 main.go:301] handling current node
	I1018 13:28:04.839382       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:28:04.839417       1 main.go:301] handling current node
	I1018 13:28:14.842021       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 13:28:14.842129       1 main.go:301] handling current node
	
	
	==> kube-apiserver [037c1dcd09818b19d840d76cf1bce5c7e62d75f7da12f0807c7abbdb70a0a744] <==
	I1018 13:27:13.111062       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 13:27:13.121497       1 aggregator.go:171] initial CRD sync complete...
	I1018 13:27:13.121519       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 13:27:13.121525       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 13:27:13.121531       1 cache.go:39] Caches are synced for autoregister controller
	I1018 13:27:13.136304       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:27:13.147097       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1018 13:27:13.153775       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 13:27:13.155812       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 13:27:13.155837       1 policy_source.go:240] refreshing policies
	I1018 13:27:13.161042       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:27:13.189992       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 13:27:13.190047       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 13:27:13.190058       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 13:27:13.736307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:27:13.915912       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:27:14.113569       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:27:14.543825       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:27:14.674253       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:27:14.733595       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:27:15.220625       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.240.103"}
	I1018 13:27:15.247939       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.169.198"}
	I1018 13:27:17.544835       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:27:17.892880       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:27:17.968017       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3099cd435aadec82c36c1ed527061ac593e3bd4a6cb6c7ecbf7ffab32ce556ed] <==
	I1018 13:27:17.500404       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 13:27:17.501894       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 13:27:17.502912       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:27:17.505357       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:27:17.508506       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 13:27:17.512052       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 13:27:17.528368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:27:17.528462       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:27:17.530649       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:27:17.530872       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:27:17.531067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 13:27:17.531423       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 13:27:17.534791       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 13:27:17.535116       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 13:27:17.531134       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 13:27:17.535269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 13:27:17.531148       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 13:27:17.535415       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 13:27:17.535520       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-208258"
	I1018 13:27:17.535603       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 13:27:17.531160       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 13:27:17.531168       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 13:27:17.537743       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 13:27:17.540013       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:27:17.555184       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cbd95b6e59aef9f61f4dc4386e03f5b8969a97c8349c3fdfd0d9113bd9976674] <==
	I1018 13:27:15.254485       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:27:15.361937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:27:15.464068       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:27:15.464109       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 13:27:15.464179       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:27:15.484724       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:27:15.484778       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:27:15.488563       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:27:15.488881       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:27:15.488905       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:27:15.490251       1 config.go:200] "Starting service config controller"
	I1018 13:27:15.490274       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:27:15.490292       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:27:15.490296       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:27:15.490306       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:27:15.490310       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:27:15.490953       1 config.go:309] "Starting node config controller"
	I1018 13:27:15.490973       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:27:15.490979       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:27:15.590341       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:27:15.590362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:27:15.590396       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [76e53086c2fd247abeb1f55181f23154153d2ef51cb8c4020a03e52db1f73a18] <==
	I1018 13:27:13.081131       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:27:13.084126       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:27:13.099720       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:27:13.099771       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:27:13.100799       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:27:13.101545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 13:27:13.140847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 13:27:13.140931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 13:27:13.140982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 13:27:13.141028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 13:27:13.141083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 13:27:13.141169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 13:27:13.141248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 13:27:13.141295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 13:27:13.141349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 13:27:13.141408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 13:27:13.141442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 13:27:13.141479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 13:27:13.141521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 13:27:13.141684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 13:27:13.141725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 13:27:13.141766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 13:27:13.141804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 13:27:13.141842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1018 13:27:13.200694       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:18.200987     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr5k5\" (UniqueName: \"kubernetes.io/projected/cd845222-6a66-4024-b059-0be5c4fed286-kube-api-access-dr5k5\") pod \"kubernetes-dashboard-855c9754f9-5t7tq\" (UID: \"cd845222-6a66-4024-b059-0be5c4fed286\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5t7tq"
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:18.201553     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cd845222-6a66-4024-b059-0be5c4fed286-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-5t7tq\" (UID: \"cd845222-6a66-4024-b059-0be5c4fed286\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5t7tq"
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:18.302834     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7dj\" (UniqueName: \"kubernetes.io/projected/55074aa2-d004-475e-9a8c-5e801e899359-kube-api-access-8z7dj\") pod \"dashboard-metrics-scraper-6ffb444bf9-qwnjs\" (UID: \"55074aa2-d004-475e-9a8c-5e801e899359\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs"
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:18.303018     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55074aa2-d004-475e-9a8c-5e801e899359-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qwnjs\" (UID: \"55074aa2-d004-475e-9a8c-5e801e899359\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs"
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: W1018 13:27:18.482190     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/crio-6f36ba93186eae28f3dd64053456e1686d84f11d59fb0e4b43c9f63817546fc9 WatchSource:0}: Error finding container 6f36ba93186eae28f3dd64053456e1686d84f11d59fb0e4b43c9f63817546fc9: Status 404 returned error can't find the container with id 6f36ba93186eae28f3dd64053456e1686d84f11d59fb0e4b43c9f63817546fc9
	Oct 18 13:27:18 default-k8s-diff-port-208258 kubelet[777]: W1018 13:27:18.524331     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/43668e797f9a1b9bad64480b2de0781320f3c7d012cbcd8da4382ec586fcffae/crio-26a272b6a765315cdfa456caa7f47047f32edc05070a8648d5701cad9501ce99 WatchSource:0}: Error finding container 26a272b6a765315cdfa456caa7f47047f32edc05070a8648d5701cad9501ce99: Status 404 returned error can't find the container with id 26a272b6a765315cdfa456caa7f47047f32edc05070a8648d5701cad9501ce99
	Oct 18 13:27:32 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:32.033776     777 scope.go:117] "RemoveContainer" containerID="84e0d0badb68fdf03cfa12a3b5dbeb5f5850037df9064ffa7da0efbcff37901d"
	Oct 18 13:27:32 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:32.078429     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5t7tq" podStartSLOduration=7.469116315 podStartE2EDuration="14.078410042s" podCreationTimestamp="2025-10-18 13:27:18 +0000 UTC" firstStartedPulling="2025-10-18 13:27:18.487677924 +0000 UTC m=+10.814394473" lastFinishedPulling="2025-10-18 13:27:25.096971659 +0000 UTC m=+17.423688200" observedRunningTime="2025-10-18 13:27:26.046074086 +0000 UTC m=+18.372790636" watchObservedRunningTime="2025-10-18 13:27:32.078410042 +0000 UTC m=+24.405126575"
	Oct 18 13:27:33 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:33.037875     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:33 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:33.038774     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:27:33 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:33.039601     777 scope.go:117] "RemoveContainer" containerID="84e0d0badb68fdf03cfa12a3b5dbeb5f5850037df9064ffa7da0efbcff37901d"
	Oct 18 13:27:34 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:34.044141     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:34 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:34.044299     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:27:38 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:38.454325     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:38 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:38.454955     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:27:45 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:45.074117     777 scope.go:117] "RemoveContainer" containerID="19fa55260cc0ebf3b9a0ca4ecde47e666790b43f694a77383361e90ed39f1d10"
	Oct 18 13:27:52 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:52.871267     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:53 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:53.100003     777 scope.go:117] "RemoveContainer" containerID="84bdc8a24e87e73fe6edfb4e37e9ec991bea9075e8832026b445db43fa34db2d"
	Oct 18 13:27:53 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:53.103858     777 scope.go:117] "RemoveContainer" containerID="b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d"
	Oct 18 13:27:53 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:53.104259     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:27:58 default-k8s-diff-port-208258 kubelet[777]: I1018 13:27:58.454714     777 scope.go:117] "RemoveContainer" containerID="b8921c1d6ce7dd06dc9b8db3c658bfbe80a70a6f43f9de3446917ac7de24aa3d"
	Oct 18 13:27:58 default-k8s-diff-port-208258 kubelet[777]: E1018 13:27:58.455756     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qwnjs_kubernetes-dashboard(55074aa2-d004-475e-9a8c-5e801e899359)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qwnjs" podUID="55074aa2-d004-475e-9a8c-5e801e899359"
	Oct 18 13:28:10 default-k8s-diff-port-208258 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:28:10 default-k8s-diff-port-208258 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:28:10 default-k8s-diff-port-208258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b2aa237ea826dcc5b0dd657850a19f4ce1133fe69b7801f50f0c87075f91175d] <==
	2025/10/18 13:27:25 Starting overwatch
	2025/10/18 13:27:25 Using namespace: kubernetes-dashboard
	2025/10/18 13:27:25 Using in-cluster config to connect to apiserver
	2025/10/18 13:27:25 Using secret token for csrf signing
	2025/10/18 13:27:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 13:27:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 13:27:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 13:27:25 Generating JWE encryption key
	2025/10/18 13:27:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 13:27:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 13:27:26 Initializing JWE encryption key from synchronized object
	2025/10/18 13:27:26 Creating in-cluster Sidecar client
	2025/10/18 13:27:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 13:27:26 Serving insecurely on HTTP port: 9090
	2025/10/18 13:27:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [022b2dc043cff21491ef118ca1a12965b94c862353c77578248674069d30db9a] <==
	W1018 13:27:45.207040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:48.662596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:52.923556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:56.521965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:27:59.575325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:02.598228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:02.605352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:28:02.605793       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 13:28:02.606047       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-208258_133592c0-b07e-488e-a9e5-923b723fb6b7!
	I1018 13:28:02.610511       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"884d1ecb-78ac-42d2-b717-b442ddc99282", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-208258_133592c0-b07e-488e-a9e5-923b723fb6b7 became leader
	W1018 13:28:02.612077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:02.630614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 13:28:02.710740       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-208258_133592c0-b07e-488e-a9e5-923b723fb6b7!
	W1018 13:28:04.634317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:04.643215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:06.647192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:06.654209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:08.657471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:08.661839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:10.666959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:10.673836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:12.679218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:12.686382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:14.689990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 13:28:14.696758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [19fa55260cc0ebf3b9a0ca4ecde47e666790b43f694a77383361e90ed39f1d10] <==
	I1018 13:27:14.600756       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 13:27:44.611228       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
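The repeated warnings in the storage-provisioner log above come from the API server flagging v1 Endpoints as deprecated in 1.33+: the provisioner's leader-election and event code still reads and writes Endpoints objects. As a purely illustrative client-go sketch (hypothetical, not the provisioner's code), this shows the deprecated call next to the EndpointSlice API the warning points at:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumes in-cluster credentials, as the provisioner pod would have.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Deprecated path: v1 Endpoints (this is what triggers the warnings.go:70 lines above).
		eps, err := client.CoreV1().Endpoints("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			fmt.Println("v1 Endpoints:", len(eps.Items))
		}

		// Replacement path: discovery.k8s.io/v1 EndpointSlice.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			fmt.Println("EndpointSlices:", len(slices.Items))
		}
	}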
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258: exit status 2 (502.34044ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-208258 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.38s)
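The status probes above pass a Go template through --format ({{.APIServer}}, {{.Host}}), and the command can exit 2 even while the printed field says Running, because the exit code reflects the overall cluster state rather than the one rendered field. A minimal standalone sketch, using a hypothetical Status struct, of how such a template is evaluated:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the fields the --format templates reference.
	type Status struct {
		Host      string
		APIServer string
		Kubelet   string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Paused", Kubelet: "Stopped"}

		// Same template syntax as `minikube status --format={{.APIServer}}`.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, st)
	}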

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-977407 --alsologtostderr -v=1
E1018 13:28:32.292222  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-977407 --alsologtostderr -v=1: exit status 80 (2.003600822s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-977407 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:28:31.418950 1050325 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:28:31.419208 1050325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:31.419237 1050325 out.go:374] Setting ErrFile to fd 2...
	I1018 13:28:31.419256 1050325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:31.419533 1050325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:28:31.420680 1050325 out.go:368] Setting JSON to false
	I1018 13:28:31.420751 1050325 mustload.go:65] Loading cluster: newest-cni-977407
	I1018 13:28:31.421220 1050325 config.go:182] Loaded profile config "newest-cni-977407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:28:31.421726 1050325 cli_runner.go:164] Run: docker container inspect newest-cni-977407 --format={{.State.Status}}
	I1018 13:28:31.454387 1050325 host.go:66] Checking if "newest-cni-977407" exists ...
	I1018 13:28:31.454726 1050325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:31.540872 1050325 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 13:28:31.530394257 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:31.541717 1050325 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-977407 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 13:28:31.545446 1050325 out.go:179] * Pausing node newest-cni-977407 ... 
	I1018 13:28:31.549178 1050325 host.go:66] Checking if "newest-cni-977407" exists ...
	I1018 13:28:31.549521 1050325 ssh_runner.go:195] Run: systemctl --version
	I1018 13:28:31.549567 1050325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-977407
	I1018 13:28:31.567016 1050325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/newest-cni-977407/id_rsa Username:docker}
	I1018 13:28:31.674668 1050325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:28:31.687978 1050325 pause.go:52] kubelet running: true
	I1018 13:28:31.688058 1050325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:28:31.911960 1050325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:28:31.912111 1050325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:28:32.007644 1050325 cri.go:89] found id: "47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219"
	I1018 13:28:32.007735 1050325 cri.go:89] found id: "1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6"
	I1018 13:28:32.007741 1050325 cri.go:89] found id: "f57b68170e6bd013db59a08eb837e502367ed5c6eed4102abd22b2a73814a34e"
	I1018 13:28:32.007751 1050325 cri.go:89] found id: "2afe57e755a936bddf779258179463776b140bea5c0043c7cf534a24dd203124"
	I1018 13:28:32.007756 1050325 cri.go:89] found id: "b1830215f5796a4f0b3218446759af2fb595fb77aefe7a2cceb1563d3ed52a70"
	I1018 13:28:32.007794 1050325 cri.go:89] found id: "5d84967a25d43b19dd6d736fe8745b5359fb545fe329c23a5a2c2bc56cc81b5d"
	I1018 13:28:32.007812 1050325 cri.go:89] found id: ""
	I1018 13:28:32.007925 1050325 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:28:32.020345 1050325 retry.go:31] will retry after 257.518086ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:28:32.278899 1050325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:28:32.294955 1050325 pause.go:52] kubelet running: false
	I1018 13:28:32.295026 1050325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:28:32.484435 1050325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:28:32.484526 1050325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:28:32.564054 1050325 cri.go:89] found id: "47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219"
	I1018 13:28:32.564079 1050325 cri.go:89] found id: "1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6"
	I1018 13:28:32.564085 1050325 cri.go:89] found id: "f57b68170e6bd013db59a08eb837e502367ed5c6eed4102abd22b2a73814a34e"
	I1018 13:28:32.564089 1050325 cri.go:89] found id: "2afe57e755a936bddf779258179463776b140bea5c0043c7cf534a24dd203124"
	I1018 13:28:32.564093 1050325 cri.go:89] found id: "b1830215f5796a4f0b3218446759af2fb595fb77aefe7a2cceb1563d3ed52a70"
	I1018 13:28:32.564097 1050325 cri.go:89] found id: "5d84967a25d43b19dd6d736fe8745b5359fb545fe329c23a5a2c2bc56cc81b5d"
	I1018 13:28:32.564100 1050325 cri.go:89] found id: ""
	I1018 13:28:32.564150 1050325 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:28:32.586156 1050325 retry.go:31] will retry after 434.111809ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 13:28:33.020566 1050325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:28:33.035840 1050325 pause.go:52] kubelet running: false
	I1018 13:28:33.035908 1050325 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 13:28:33.222140 1050325 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 13:28:33.222224 1050325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 13:28:33.309412 1050325 cri.go:89] found id: "47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219"
	I1018 13:28:33.309432 1050325 cri.go:89] found id: "1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6"
	I1018 13:28:33.309437 1050325 cri.go:89] found id: "f57b68170e6bd013db59a08eb837e502367ed5c6eed4102abd22b2a73814a34e"
	I1018 13:28:33.309440 1050325 cri.go:89] found id: "2afe57e755a936bddf779258179463776b140bea5c0043c7cf534a24dd203124"
	I1018 13:28:33.309443 1050325 cri.go:89] found id: "b1830215f5796a4f0b3218446759af2fb595fb77aefe7a2cceb1563d3ed52a70"
	I1018 13:28:33.309447 1050325 cri.go:89] found id: "5d84967a25d43b19dd6d736fe8745b5359fb545fe329c23a5a2c2bc56cc81b5d"
	I1018 13:28:33.309451 1050325 cri.go:89] found id: ""
	I1018 13:28:33.309498 1050325 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 13:28:33.325919 1050325 out.go:203] 
	W1018 13:28:33.328903 1050325 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T13:28:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 13:28:33.328926 1050325 out.go:285] * 
	* 
	W1018 13:28:33.336004 1050325 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 13:28:33.341116 1050325 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-977407 --alsologtostderr -v=1 failed: exit status 80
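The stderr trace above shows the shape of the failure: pause disables the kubelet, lists CRI containers via crictl, then runs `sudo runc list -f json`, which fails with `open /run/runc: no such file or directory`; the retry.go lines show two randomized back-offs before it gives up with GUEST_PAUSE. As a rough illustration only (not minikube's actual retry helper), the back-off pattern visible in those log lines looks like this:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs op a few times with a randomized, growing delay,
	// mirroring the "will retry after ..." lines in the pause log above.
	func retryWithBackoff(attempts int, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			delay := time.Duration(200+rand.Intn(300)) * time.Millisecond * time.Duration(i+1)
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(3, func() error {
			// Stand-in for `sudo runc list -f json` failing on the node.
			return errors.New("open /run/runc: no such file or directory")
		})
		fmt.Println("giving up:", err)
	}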
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-977407
helpers_test.go:243: (dbg) docker inspect newest-cni-977407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8",
	        "Created": "2025-10-18T13:27:32.409614447Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1046384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:28:10.195911009Z",
	            "FinishedAt": "2025-10-18T13:28:09.063431361Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/hosts",
	        "LogPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8-json.log",
	        "Name": "/newest-cni-977407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-977407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-977407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8",
	                "LowerDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-977407",
	                "Source": "/var/lib/docker/volumes/newest-cni-977407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-977407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-977407",
	                "name.minikube.sigs.k8s.io": "newest-cni-977407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e30cde7441c36b0fcea961c0a71d0bdc4da23c07363cabe347d3a6ac34f5c406",
	            "SandboxKey": "/var/run/docker/netns/e30cde7441c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34202"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34203"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34204"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-977407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:f8:51:ba:c1:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6e5d236d58bbb84ba4cff1833e88a247959569bfbd2830bebe94b5f1ed831d0",
	                    "EndpointID": "637c5e98f7457f26bde485ed748314b435e1c688649479badd02d39618da511c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-977407",
	                        "fb38573e5ba6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
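Earlier in the pause trace, minikube resolves the node's SSH endpoint with `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`; the inspect output above shows the structure that template walks (Ports → "22/tcp" → first binding → HostPort, here 34202). A small sketch with hypothetical types (not minikube code) doing the same lookup by unmarshalling that JSON in Go:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Only the fields needed for the port lookup are modelled here.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// Trimmed-down copy of the "Ports" section from the docker inspect output above.
		data := []byte(`[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"34202"}]}}}]`)

		var out []inspect
		if err := json.Unmarshal(data, &out); err != nil {
			panic(err)
		}
		// Equivalent to: {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		fmt.Println(out[0].NetworkSettings.Ports["22/tcp"][0].HostPort) // 34202
	}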
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977407 -n newest-cni-977407
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977407 -n newest-cni-977407: exit status 2 (485.782798ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-977407 logs -n 25
E1018 13:28:34.853627  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-977407 logs -n 25: (1.442940429s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                                                                                               │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ stop    │ -p embed-certs-774829 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:26 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-208258 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-208258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ image   │ embed-certs-774829 image list --format=json                                                                                                                                                                                                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ pause   │ -p embed-certs-774829 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-977407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ stop    │ -p newest-cni-977407 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-977407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ pause   │ -p default-k8s-diff-port-208258 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ delete  │ -p default-k8s-diff-port-208258                                                                                                                                                                                                               │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ delete  │ -p default-k8s-diff-port-208258                                                                                                                                                                                                               │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ start   │ -p auto-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-633218                  │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ image   │ newest-cni-977407 image list --format=json                                                                                                                                                                                                    │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ pause   │ -p newest-cni-977407 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:28:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:28:20.898912 1048954 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:28:20.899144 1048954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:20.899172 1048954 out.go:374] Setting ErrFile to fd 2...
	I1018 13:28:20.899189 1048954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:20.899484 1048954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:28:20.899958 1048954 out.go:368] Setting JSON to false
	I1018 13:28:20.901008 1048954 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18653,"bootTime":1760775448,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:28:20.901099 1048954 start.go:141] virtualization:  
	I1018 13:28:20.907090 1048954 out.go:179] * [auto-633218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:28:20.910478 1048954 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:28:20.910544 1048954 notify.go:220] Checking for updates...
	I1018 13:28:20.917076 1048954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:28:20.920089 1048954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:28:20.923463 1048954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:28:20.926388 1048954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:28:20.929324 1048954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:28:20.932731 1048954 config.go:182] Loaded profile config "newest-cni-977407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:28:20.932887 1048954 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:28:20.973135 1048954 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:28:20.973266 1048954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:21.081001 1048954 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:28:21.06472104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:21.081111 1048954 docker.go:318] overlay module found
	I1018 13:28:21.084267 1048954 out.go:179] * Using the docker driver based on user configuration
	I1018 13:28:21.087223 1048954 start.go:305] selected driver: docker
	I1018 13:28:21.087241 1048954 start.go:925] validating driver "docker" against <nil>
	I1018 13:28:21.087262 1048954 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:28:21.088027 1048954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:21.201392 1048954 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:28:21.188290678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:21.201555 1048954 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 13:28:21.201789 1048954 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:28:21.205619 1048954 out.go:179] * Using Docker driver with root privileges
	I1018 13:28:21.208438 1048954 cni.go:84] Creating CNI manager for ""
	I1018 13:28:21.208509 1048954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:28:21.208523 1048954 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 13:28:21.208618 1048954 start.go:349] cluster config:
	{Name:auto-633218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-633218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1018 13:28:21.211669 1048954 out.go:179] * Starting "auto-633218" primary control-plane node in "auto-633218" cluster
	I1018 13:28:21.214483 1048954 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:28:21.217479 1048954 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:28:21.220218 1048954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:28:21.220277 1048954 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:28:21.220293 1048954 cache.go:58] Caching tarball of preloaded images
	I1018 13:28:21.220395 1048954 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:28:21.220420 1048954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:28:21.220531 1048954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/config.json ...
	I1018 13:28:21.220554 1048954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/config.json: {Name:mk006ceffddc00b9d781f27a9fbf2398cc6aca13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:28:21.220708 1048954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:28:21.244392 1048954 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:28:21.244427 1048954 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:28:21.244443 1048954 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:28:21.244473 1048954 start.go:360] acquireMachinesLock for auto-633218: {Name:mkf2b486f2f949ee636bdbec3292fb47df044d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:28:21.244574 1048954 start.go:364] duration metric: took 81.265µs to acquireMachinesLock for "auto-633218"
	I1018 13:28:21.244604 1048954 start.go:93] Provisioning new machine with config: &{Name:auto-633218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-633218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:28:21.244676 1048954 start.go:125] createHost starting for "" (driver="docker")
	I1018 13:28:19.890201 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 13:28:19.890241 1046185 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 13:28:19.919062 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 13:28:19.919098 1046185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 13:28:19.958524 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 13:28:19.958562 1046185 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 13:28:20.028055 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 13:28:20.028084 1046185 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 13:28:20.112808 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 13:28:20.112850 1046185 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 13:28:20.203344 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 13:28:20.203386 1046185 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 13:28:20.226687 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:28:20.226709 1046185 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 13:28:20.253822 1046185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:28:21.248071 1048954 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 13:28:21.248319 1048954 start.go:159] libmachine.API.Create for "auto-633218" (driver="docker")
	I1018 13:28:21.248360 1048954 client.go:168] LocalClient.Create starting
	I1018 13:28:21.248473 1048954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 13:28:21.248523 1048954 main.go:141] libmachine: Decoding PEM data...
	I1018 13:28:21.248541 1048954 main.go:141] libmachine: Parsing certificate...
	I1018 13:28:21.248608 1048954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 13:28:21.248645 1048954 main.go:141] libmachine: Decoding PEM data...
	I1018 13:28:21.248659 1048954 main.go:141] libmachine: Parsing certificate...
	I1018 13:28:21.249076 1048954 cli_runner.go:164] Run: docker network inspect auto-633218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 13:28:21.285509 1048954 cli_runner.go:211] docker network inspect auto-633218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 13:28:21.285595 1048954 network_create.go:284] running [docker network inspect auto-633218] to gather additional debugging logs...
	I1018 13:28:21.285615 1048954 cli_runner.go:164] Run: docker network inspect auto-633218
	W1018 13:28:21.321268 1048954 cli_runner.go:211] docker network inspect auto-633218 returned with exit code 1
	I1018 13:28:21.321306 1048954 network_create.go:287] error running [docker network inspect auto-633218]: docker network inspect auto-633218: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-633218 not found
	I1018 13:28:21.321320 1048954 network_create.go:289] output of [docker network inspect auto-633218]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-633218 not found
	
	** /stderr **
	I1018 13:28:21.321430 1048954 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:28:21.385830 1048954 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
	I1018 13:28:21.386224 1048954 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b162987809b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:5f:25:ac:cd:2a} reservation:<nil>}
	I1018 13:28:21.386467 1048954 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c986d614dab5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:69:4f:12:e6:e4} reservation:<nil>}
	I1018 13:28:21.386796 1048954 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b6e5d236d58b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:10:25:d1:8b:e0} reservation:<nil>}
	I1018 13:28:21.387213 1048954 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c9b10}
	I1018 13:28:21.387238 1048954 network_create.go:124] attempt to create docker network auto-633218 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 13:28:21.387300 1048954 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-633218 auto-633218
	I1018 13:28:21.452537 1048954 network_create.go:108] docker network auto-633218 192.168.85.0/24 created
	I1018 13:28:21.452574 1048954 kic.go:121] calculated static IP "192.168.85.2" for the "auto-633218" container
	I1018 13:28:21.452661 1048954 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 13:28:21.473840 1048954 cli_runner.go:164] Run: docker volume create auto-633218 --label name.minikube.sigs.k8s.io=auto-633218 --label created_by.minikube.sigs.k8s.io=true
	I1018 13:28:21.500997 1048954 oci.go:103] Successfully created a docker volume auto-633218
	I1018 13:28:21.501080 1048954 cli_runner.go:164] Run: docker run --rm --name auto-633218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-633218 --entrypoint /usr/bin/test -v auto-633218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 13:28:22.166675 1048954 oci.go:107] Successfully prepared a docker volume auto-633218
	I1018 13:28:22.166725 1048954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:28:22.166745 1048954 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 13:28:22.166829 1048954 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-633218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 13:28:29.552459 1046185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.032227837s)
	I1018 13:28:29.552520 1046185 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.00912385s)
	I1018 13:28:29.552531 1046185 api_server.go:72] duration metric: took 10.567489068s to wait for apiserver process to appear ...
	I1018 13:28:29.552538 1046185 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:28:29.552554 1046185 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:28:29.552860 1046185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.850602097s)
	I1018 13:28:29.553138 1046185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.299282198s)
	I1018 13:28:29.555993 1046185 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-977407 addons enable metrics-server
	
	I1018 13:28:29.576487 1046185 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:28:29.576512 1046185 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:28:29.591020 1046185 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 13:28:29.593847 1046185 addons.go:514] duration metric: took 10.60844078s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 13:28:30.052656 1046185 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:28:30.078802 1046185 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 13:28:30.084797 1046185 api_server.go:141] control plane version: v1.34.1
	I1018 13:28:30.084877 1046185 api_server.go:131] duration metric: took 532.331972ms to wait for apiserver health ...
	I1018 13:28:30.084903 1046185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:28:30.096467 1046185 system_pods.go:59] 8 kube-system pods found
	I1018 13:28:30.096561 1046185 system_pods.go:61] "coredns-66bc5c9577-h2dzv" [7bf41590-b205-482b-a509-cca14eef8f53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 13:28:30.096586 1046185 system_pods.go:61] "etcd-newest-cni-977407" [e959f287-a8d0-4c66-882a-7bf03c0d596b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:28:30.096626 1046185 system_pods.go:61] "kindnet-g5rjn" [62df2833-c27f-44a7-932f-ddd5e8e4888e] Running
	I1018 13:28:30.096657 1046185 system_pods.go:61] "kube-apiserver-newest-cni-977407" [dfc137e0-d480-483e-96e3-85ca7dba3e3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:28:30.096684 1046185 system_pods.go:61] "kube-controller-manager-newest-cni-977407" [d43756f2-e9bd-413a-b29f-828c43157138] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:28:30.096709 1046185 system_pods.go:61] "kube-proxy-x4kds" [fd820b89-8782-4a68-8488-8eae7823ed4e] Running
	I1018 13:28:30.096746 1046185 system_pods.go:61] "kube-scheduler-newest-cni-977407" [bbe144ae-f7e7-4fb9-b026-a17a60555951] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:28:30.096776 1046185 system_pods.go:61] "storage-provisioner" [4d216f4e-9951-4993-8149-3f06f900b895] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 13:28:30.096802 1046185 system_pods.go:74] duration metric: took 11.879857ms to wait for pod list to return data ...
	I1018 13:28:30.096827 1046185 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:28:30.104242 1046185 default_sa.go:45] found service account: "default"
	I1018 13:28:30.104313 1046185 default_sa.go:55] duration metric: took 7.451146ms for default service account to be created ...
	I1018 13:28:30.104348 1046185 kubeadm.go:586] duration metric: took 11.119302289s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 13:28:30.104424 1046185 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:28:30.108508 1046185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:28:30.108592 1046185 node_conditions.go:123] node cpu capacity is 2
	I1018 13:28:30.108620 1046185 node_conditions.go:105] duration metric: took 4.175818ms to run NodePressure ...
	I1018 13:28:30.108659 1046185 start.go:241] waiting for startup goroutines ...
	I1018 13:28:30.108685 1046185 start.go:246] waiting for cluster config update ...
	I1018 13:28:30.108712 1046185 start.go:255] writing updated cluster config ...
	I1018 13:28:30.109052 1046185 ssh_runner.go:195] Run: rm -f paused
	I1018 13:28:30.219124 1046185 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:28:30.223104 1046185 out.go:179] * Done! kubectl is now configured to use "newest-cni-977407" cluster and "default" namespace by default
	I1018 13:28:27.546308 1048954 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-633218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.379428549s)
	I1018 13:28:27.546337 1048954 kic.go:203] duration metric: took 5.379589174s to extract preloaded images to volume ...
	W1018 13:28:27.546460 1048954 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 13:28:27.546589 1048954 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 13:28:27.679540 1048954 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-633218 --name auto-633218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-633218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-633218 --network auto-633218 --ip 192.168.85.2 --volume auto-633218:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 13:28:28.193625 1048954 cli_runner.go:164] Run: docker container inspect auto-633218 --format={{.State.Running}}
	I1018 13:28:28.228282 1048954 cli_runner.go:164] Run: docker container inspect auto-633218 --format={{.State.Status}}
	I1018 13:28:28.259607 1048954 cli_runner.go:164] Run: docker exec auto-633218 stat /var/lib/dpkg/alternatives/iptables
	I1018 13:28:28.330734 1048954 oci.go:144] the created container "auto-633218" has a running status.
	I1018 13:28:28.330768 1048954 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa...
	I1018 13:28:28.823478 1048954 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 13:28:28.853368 1048954 cli_runner.go:164] Run: docker container inspect auto-633218 --format={{.State.Status}}
	I1018 13:28:28.880090 1048954 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 13:28:28.880115 1048954 kic_runner.go:114] Args: [docker exec --privileged auto-633218 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 13:28:28.980097 1048954 cli_runner.go:164] Run: docker container inspect auto-633218 --format={{.State.Status}}
	I1018 13:28:29.009653 1048954 machine.go:93] provisionDockerMachine start ...
	I1018 13:28:29.009745 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:29.038173 1048954 main.go:141] libmachine: Using SSH client type: native
	I1018 13:28:29.038541 1048954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1018 13:28:29.038551 1048954 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:28:29.039434 1048954 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36076->127.0.0.1:34207: read: connection reset by peer
	
	
	==> CRI-O <==
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.494114063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.51472755Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ed6ac9e9-d117-40fb-8ea1-63ac6cf9d553 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.519077844Z" level=info msg="Ran pod sandbox 8b69f0532eb0065180763ad105c02df59b6fb1e63af6bc7ce1a715718632e5c9 with infra container: kube-system/kube-proxy-x4kds/POD" id=ed6ac9e9-d117-40fb-8ea1-63ac6cf9d553 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.523105279Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f290b2b9-79cb-4139-ab7d-38472d020c06 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.528098108Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9dd33a9f-7191-43de-a7b0-8c2bd9c61540 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.531306916Z" level=info msg="Creating container: kube-system/kube-proxy-x4kds/kube-proxy" id=494fb4a9-72bd-4cfe-8b0d-c0973cde1e19 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.531869072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.547540381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.549565283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.736410318Z" level=info msg="Running pod sandbox: kube-system/kindnet-g5rjn/POD" id=ea89191a-4187-4766-a0e8-4af5eb8c6e12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.736488267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.758774799Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ea89191a-4187-4766-a0e8-4af5eb8c6e12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.781601539Z" level=info msg="Ran pod sandbox 3b13a60cdf0ddab9196d46244f541646f325059b335ebc73f93ea0d5980b65ab with infra container: kube-system/kindnet-g5rjn/POD" id=ea89191a-4187-4766-a0e8-4af5eb8c6e12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.793150397Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d0a13a1d-edcb-4a35-84aa-fff622675db1 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.797683864Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4d3ff67b-19b5-460f-9deb-9400d348380c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.802611313Z" level=info msg="Creating container: kube-system/kindnet-g5rjn/kindnet-cni" id=08998fed-74cb-4fd3-9715-467f9f880648 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.803330198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.839413187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.846955296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.897652147Z" level=info msg="Created container 47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219: kube-system/kindnet-g5rjn/kindnet-cni" id=08998fed-74cb-4fd3-9715-467f9f880648 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.898289808Z" level=info msg="Starting container: 47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219" id=a2374e16-ac0b-43b4-a099-dc7e6bc966aa name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.904710862Z" level=info msg="Started container" PID=1065 containerID=47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219 description=kube-system/kindnet-g5rjn/kindnet-cni id=a2374e16-ac0b-43b4-a099-dc7e6bc966aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b13a60cdf0ddab9196d46244f541646f325059b335ebc73f93ea0d5980b65ab
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.987010979Z" level=info msg="Created container 1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6: kube-system/kube-proxy-x4kds/kube-proxy" id=494fb4a9-72bd-4cfe-8b0d-c0973cde1e19 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.990216554Z" level=info msg="Starting container: 1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6" id=a657bed0-ef5b-4e47-97e0-b6c654cb9e65 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:28:28 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.998653787Z" level=info msg="Started container" PID=1058 containerID=1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6 description=kube-system/kube-proxy-x4kds/kube-proxy id=a657bed0-ef5b-4e47-97e0-b6c654cb9e65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b69f0532eb0065180763ad105c02df59b6fb1e63af6bc7ce1a715718632e5c9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	47be6b9a3f94a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   3b13a60cdf0dd       kindnet-g5rjn                               kube-system
	1133dcb977e41       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   8b69f0532eb00       kube-proxy-x4kds                            kube-system
	f57b68170e6bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            1                   72a71dfd42260       kube-apiserver-newest-cni-977407            kube-system
	2afe57e755a93       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            1                   ee373a63e0481       kube-scheduler-newest-cni-977407            kube-system
	b1830215f5796       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      1                   95a6a35759920       etcd-newest-cni-977407                      kube-system
	5d84967a25d43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   1                   0daef395975b3       kube-controller-manager-newest-cni-977407   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-977407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-977407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=newest-cni-977407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_27_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:27:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-977407
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:28:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:28:27 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:28:27 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:28:27 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 13:28:27 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-977407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f89834aa-d14f-47e3-baef-c9c838d135d3
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-977407                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-g5rjn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-977407             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-977407    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-x4kds                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-977407             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 44s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  43s (x8 over 44s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x8 over 44s)  kubelet          Node newest-cni-977407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 44s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     36s                kubelet          Node newest-cni-977407 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node newest-cni-977407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-977407 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           32s                node-controller  Node newest-cni-977407 event: Registered Node newest-cni-977407 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-977407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-977407 event: Registered Node newest-cni-977407 in Controller
	
	
	==> dmesg <==
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	[Oct18 13:26] overlayfs: idmapped layers are currently not supported
	[Oct18 13:27] overlayfs: idmapped layers are currently not supported
	[ +43.080166] overlayfs: idmapped layers are currently not supported
	[Oct18 13:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b1830215f5796a4f0b3218446759af2fb595fb77aefe7a2cceb1563d3ed52a70] <==
	{"level":"warn","ts":"2025-10-18T13:28:25.294982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.315134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.350409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.379556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.425469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.459915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.492083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.513979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.538229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.611179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.638736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.667233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.747855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47456","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T13:28:27.213072Z","caller":"traceutil/trace.go:172","msg":"trace[1460142882] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"118.943942ms","start":"2025-10-18T13:28:27.094113Z","end":"2025-10-18T13:28:27.213057Z","steps":["trace[1460142882] 'process raft request'  (duration: 118.241551ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T13:28:27.364218Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.213004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-977407\" limit:1 ","response":"range_response_count:1 size:5672"}
	{"level":"info","ts":"2025-10-18T13:28:27.364345Z","caller":"traceutil/trace.go:172","msg":"trace[1316027430] range","detail":"{range_begin:/registry/minions/newest-cni-977407; range_end:; response_count:1; response_revision:431; }","duration":"143.357308ms","start":"2025-10-18T13:28:27.220974Z","end":"2025-10-18T13:28:27.364332Z","steps":["trace[1316027430] 'agreement among raft nodes before linearized reading'  (duration: 96.332745ms)","trace[1316027430] 'range keys from in-memory index tree'  (duration: 46.812278ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T13:28:27.364591Z","caller":"traceutil/trace.go:172","msg":"trace[592095908] transaction","detail":"{read_only:false; number_of_response:0; response_revision:431; }","duration":"136.643575ms","start":"2025-10-18T13:28:27.227933Z","end":"2025-10-18T13:28:27.364577Z","steps":["trace[592095908] 'process raft request'  (duration: 89.438513ms)","trace[592095908] 'compare'  (duration: 46.634561ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T13:28:27.366881Z","caller":"traceutil/trace.go:172","msg":"trace[911608483] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"112.309495ms","start":"2025-10-18T13:28:27.254560Z","end":"2025-10-18T13:28:27.366869Z","steps":["trace[911608483] 'process raft request'  (duration: 109.493373ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T13:28:27.367319Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.53857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-g5rjn\" limit:1 ","response":"range_response_count:1 size:5409"}
	{"level":"info","ts":"2025-10-18T13:28:27.367777Z","caller":"traceutil/trace.go:172","msg":"trace[162470990] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-g5rjn; range_end:; response_count:1; response_revision:432; }","duration":"110.997463ms","start":"2025-10-18T13:28:27.256771Z","end":"2025-10-18T13:28:27.367769Z","steps":["trace[162470990] 'agreement among raft nodes before linearized reading'  (duration: 110.051549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T13:28:27.367378Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.214977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-x4kds\" limit:1 ","response":"range_response_count:1 size:5192"}
	{"level":"info","ts":"2025-10-18T13:28:27.367974Z","caller":"traceutil/trace.go:172","msg":"trace[1450751976] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-x4kds; range_end:; response_count:1; response_revision:432; }","duration":"120.022666ms","start":"2025-10-18T13:28:27.247943Z","end":"2025-10-18T13:28:27.367966Z","steps":["trace[1450751976] 'agreement among raft nodes before linearized reading'  (duration: 116.172768ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T13:28:27.367677Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.267372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-g5rjn\" limit:1 ","response":"range_response_count:1 size:5409"}
	{"level":"info","ts":"2025-10-18T13:28:27.368083Z","caller":"traceutil/trace.go:172","msg":"trace[1977778883] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-g5rjn; range_end:; response_count:1; response_revision:432; }","duration":"113.678528ms","start":"2025-10-18T13:28:27.254398Z","end":"2025-10-18T13:28:27.368077Z","steps":["trace[1977778883] 'agreement among raft nodes before linearized reading'  (duration: 112.43579ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T13:28:27.560625Z","caller":"traceutil/trace.go:172","msg":"trace[2130343996] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"100.588401ms","start":"2025-10-18T13:28:27.460019Z","end":"2025-10-18T13:28:27.560607Z","steps":["trace[2130343996] 'process raft request'  (duration: 100.496437ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:28:35 up  5:11,  0 user,  load average: 4.91, 3.39, 2.69
	Linux newest-cni-977407 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219] <==
	I1018 13:28:28.114557       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:28:28.115428       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:28:28.120624       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:28:28.120657       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:28:28.120673       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:28:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:28:28.319886       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:28:28.319916       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:28:28.319924       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:28:28.320546       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [f57b68170e6bd013db59a08eb837e502367ed5c6eed4102abd22b2a73814a34e] <==
	I1018 13:28:27.030612       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:28:27.031946       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 13:28:27.039262       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 13:28:27.039449       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 13:28:27.039500       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 13:28:27.039620       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 13:28:27.040501       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 13:28:27.044759       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 13:28:27.044828       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:28:27.053089       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:28:27.083929       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 13:28:27.083962       1 policy_source.go:240] refreshing policies
	I1018 13:28:27.093512       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:28:27.247392       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:28:27.611402       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:28:28.689325       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:28:29.024944       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:28:29.114962       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:28:29.142975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:28:29.393437       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.183.201"}
	I1018 13:28:29.448981       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.212.39"}
	I1018 13:28:31.417513       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:28:31.512659       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:28:31.601808       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:28:31.806256       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [5d84967a25d43b19dd6d736fe8745b5359fb545fe329c23a5a2c2bc56cc81b5d] <==
	I1018 13:28:31.377604       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 13:28:31.379844       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:28:31.381578       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 13:28:31.381586       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 13:28:31.381614       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:28:31.384719       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 13:28:31.386492       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:28:31.386552       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 13:28:31.386623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 13:28:31.386624       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 13:28:31.391749       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 13:28:31.392023       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:28:31.392215       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:28:31.393373       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 13:28:31.397096       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 13:28:31.397212       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:28:31.403783       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:28:31.404656       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 13:28:31.413923       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 13:28:31.419876       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:28:31.419952       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:28:31.419995       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:28:31.420957       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:28:31.446860       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 13:28:31.456818       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	
	
	==> kube-proxy [1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6] <==
	I1018 13:28:29.349389       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:28:29.704648       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:28:29.807720       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:28:29.807784       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:28:29.819743       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:28:29.857035       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:28:29.857091       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:28:29.941939       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:28:29.948034       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:28:29.948067       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:28:29.951129       1 config.go:200] "Starting service config controller"
	I1018 13:28:29.951150       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:28:29.951169       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:28:29.951183       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:28:29.951197       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:28:29.951201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:28:29.952934       1 config.go:309] "Starting node config controller"
	I1018 13:28:29.952954       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:28:29.952961       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:28:30.077101       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:28:30.077316       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:28:30.077373       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2afe57e755a936bddf779258179463776b140bea5c0043c7cf534a24dd203124] <==
	I1018 13:28:29.697804       1 serving.go:386] Generated self-signed cert in-memory
	I1018 13:28:31.011198       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:28:31.011254       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:28:31.019082       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:28:31.019174       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 13:28:31.019197       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 13:28:31.019226       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 13:28:31.048003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:28:31.048117       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:28:31.048248       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:28:31.048285       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:28:31.133651       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 13:28:31.349276       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:28:31.349359       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:28:25 newest-cni-977407 kubelet[728]: E1018 13:28:25.090261     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-977407\" not found" node="newest-cni-977407"
	Oct 18 13:28:25 newest-cni-977407 kubelet[728]: E1018 13:28:25.403291     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-977407\" not found" node="newest-cni-977407"
	Oct 18 13:28:26 newest-cni-977407 kubelet[728]: I1018 13:28:26.833799     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.120150     728 apiserver.go:52] "Watching apiserver"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.136751     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.169899     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd820b89-8782-4a68-8488-8eae7823ed4e-lib-modules\") pod \"kube-proxy-x4kds\" (UID: \"fd820b89-8782-4a68-8488-8eae7823ed4e\") " pod="kube-system/kube-proxy-x4kds"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.169973     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd820b89-8782-4a68-8488-8eae7823ed4e-xtables-lock\") pod \"kube-proxy-x4kds\" (UID: \"fd820b89-8782-4a68-8488-8eae7823ed4e\") " pod="kube-system/kube-proxy-x4kds"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.170001     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-xtables-lock\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.170018     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-lib-modules\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.170050     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-cni-cfg\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: E1018 13:28:27.219509     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-977407\" already exists" pod="kube-system/kube-controller-manager-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.219568     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: E1018 13:28:27.403859     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-977407\" already exists" pod="kube-system/kube-scheduler-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.403996     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.405004     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.418628     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.418858     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.418976     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.419906     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: E1018 13:28:27.510466     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-977407\" already exists" pod="kube-system/etcd-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.510508     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: E1018 13:28:27.612263     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-977407\" already exists" pod="kube-system/kube-apiserver-newest-cni-977407"
	Oct 18 13:28:31 newest-cni-977407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:28:31 newest-cni-977407 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:28:31 newest-cni-977407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-977407 -n newest-cni-977407
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-977407 -n newest-cni-977407: exit status 2 (445.190282ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-977407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h2dzv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zzfsb kubernetes-dashboard-855c9754f9-vm8ng
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zzfsb kubernetes-dashboard-855c9754f9-vm8ng
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zzfsb kubernetes-dashboard-855c9754f9-vm8ng: exit status 1 (110.584885ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h2dzv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-zzfsb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vm8ng" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zzfsb kubernetes-dashboard-855c9754f9-vm8ng: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-977407
helpers_test.go:243: (dbg) docker inspect newest-cni-977407:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8",
	        "Created": "2025-10-18T13:27:32.409614447Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1046384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T13:28:10.195911009Z",
	            "FinishedAt": "2025-10-18T13:28:09.063431361Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/hosts",
	        "LogPath": "/var/lib/docker/containers/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8/fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8-json.log",
	        "Name": "/newest-cni-977407",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-977407:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-977407",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb38573e5ba6ec0125d48d8b31d4a943ad357da8a5f9ecf943eb826f831304c8",
	                "LowerDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370-init/diff:/var/lib/docker/overlay2/48299dba45cdb89e0250a34480f6b62819b0ab86c1bef4a1220a7272328ad42e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02680bcd6a40755e62da827d27459f87fee011e23249372c02d354fe5c0b5370/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-977407",
	                "Source": "/var/lib/docker/volumes/newest-cni-977407/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-977407",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-977407",
	                "name.minikube.sigs.k8s.io": "newest-cni-977407",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e30cde7441c36b0fcea961c0a71d0bdc4da23c07363cabe347d3a6ac34f5c406",
	            "SandboxKey": "/var/run/docker/netns/e30cde7441c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34202"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34203"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34204"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34205"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-977407": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:f8:51:ba:c1:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6e5d236d58bbb84ba4cff1833e88a247959569bfbd2830bebe94b5f1ed831d0",
	                    "EndpointID": "637c5e98f7457f26bde485ed748314b435e1c688649479badd02d39618da511c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-977407",
	                        "fb38573e5ba6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977407 -n newest-cni-977407
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977407 -n newest-cni-977407: exit status 2 (435.085976ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-977407 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-977407 logs -n 25: (1.413147117s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-157679                                                                                                                                                                                                               │ disable-driver-mounts-157679 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:25 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p embed-certs-774829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │                     │
	│ stop    │ -p embed-certs-774829 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:25 UTC │ 18 Oct 25 13:26 UTC │
	│ addons  │ enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:26 UTC │
	│ start   │ -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-208258 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-208258 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:26 UTC │ 18 Oct 25 13:27 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-208258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ image   │ embed-certs-774829 image list --format=json                                                                                                                                                                                                   │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ pause   │ -p embed-certs-774829 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │                     │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ delete  │ -p embed-certs-774829                                                                                                                                                                                                                         │ embed-certs-774829           │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:27 UTC │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:27 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable metrics-server -p newest-cni-977407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ stop    │ -p newest-cni-977407 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-977407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ pause   │ -p default-k8s-diff-port-208258 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ start   │ -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ delete  │ -p default-k8s-diff-port-208258                                                                                                                                                                                                               │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ delete  │ -p default-k8s-diff-port-208258                                                                                                                                                                                                               │ default-k8s-diff-port-208258 │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ start   │ -p auto-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-633218                  │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	│ image   │ newest-cni-977407 image list --format=json                                                                                                                                                                                                    │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │ 18 Oct 25 13:28 UTC │
	│ pause   │ -p newest-cni-977407 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-977407            │ jenkins │ v1.37.0 │ 18 Oct 25 13:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 13:28:20
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 13:28:20.898912 1048954 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:28:20.899144 1048954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:20.899172 1048954 out.go:374] Setting ErrFile to fd 2...
	I1018 13:28:20.899189 1048954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:28:20.899484 1048954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:28:20.899958 1048954 out.go:368] Setting JSON to false
	I1018 13:28:20.901008 1048954 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18653,"bootTime":1760775448,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:28:20.901099 1048954 start.go:141] virtualization:  
	I1018 13:28:20.907090 1048954 out.go:179] * [auto-633218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:28:20.910478 1048954 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:28:20.910544 1048954 notify.go:220] Checking for updates...
	I1018 13:28:20.917076 1048954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:28:20.920089 1048954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:28:20.923463 1048954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:28:20.926388 1048954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:28:20.929324 1048954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:28:20.932731 1048954 config.go:182] Loaded profile config "newest-cni-977407": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:28:20.932887 1048954 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:28:20.973135 1048954 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:28:20.973266 1048954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:21.081001 1048954 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:28:21.06472104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:21.081111 1048954 docker.go:318] overlay module found
	I1018 13:28:21.084267 1048954 out.go:179] * Using the docker driver based on user configuration
	I1018 13:28:21.087223 1048954 start.go:305] selected driver: docker
	I1018 13:28:21.087241 1048954 start.go:925] validating driver "docker" against <nil>
	I1018 13:28:21.087262 1048954 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:28:21.088027 1048954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:28:21.201392 1048954 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:28:21.188290678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:28:21.201555 1048954 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 13:28:21.201789 1048954 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 13:28:21.205619 1048954 out.go:179] * Using Docker driver with root privileges
	I1018 13:28:21.208438 1048954 cni.go:84] Creating CNI manager for ""
	I1018 13:28:21.208509 1048954 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 13:28:21.208523 1048954 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 13:28:21.208618 1048954 start.go:349] cluster config:
	{Name:auto-633218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-633218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 13:28:21.211669 1048954 out.go:179] * Starting "auto-633218" primary control-plane node in "auto-633218" cluster
	I1018 13:28:21.214483 1048954 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 13:28:21.217479 1048954 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 13:28:21.220218 1048954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:28:21.220277 1048954 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 13:28:21.220293 1048954 cache.go:58] Caching tarball of preloaded images
	I1018 13:28:21.220395 1048954 preload.go:233] Found /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 13:28:21.220420 1048954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 13:28:21.220531 1048954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/config.json ...
	I1018 13:28:21.220554 1048954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/config.json: {Name:mk006ceffddc00b9d781f27a9fbf2398cc6aca13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 13:28:21.220708 1048954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 13:28:21.244392 1048954 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 13:28:21.244427 1048954 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 13:28:21.244443 1048954 cache.go:232] Successfully downloaded all kic artifacts
	I1018 13:28:21.244473 1048954 start.go:360] acquireMachinesLock for auto-633218: {Name:mkf2b486f2f949ee636bdbec3292fb47df044d18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 13:28:21.244574 1048954 start.go:364] duration metric: took 81.265µs to acquireMachinesLock for "auto-633218"
	I1018 13:28:21.244604 1048954 start.go:93] Provisioning new machine with config: &{Name:auto-633218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-633218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 13:28:21.244676 1048954 start.go:125] createHost starting for "" (driver="docker")
	I1018 13:28:19.890201 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 13:28:19.890241 1046185 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 13:28:19.919062 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 13:28:19.919098 1046185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 13:28:19.958524 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 13:28:19.958562 1046185 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 13:28:20.028055 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 13:28:20.028084 1046185 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 13:28:20.112808 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 13:28:20.112850 1046185 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 13:28:20.203344 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 13:28:20.203386 1046185 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 13:28:20.226687 1046185 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:28:20.226709 1046185 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 13:28:20.253822 1046185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 13:28:21.248071 1048954 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 13:28:21.248319 1048954 start.go:159] libmachine.API.Create for "auto-633218" (driver="docker")
	I1018 13:28:21.248360 1048954 client.go:168] LocalClient.Create starting
	I1018 13:28:21.248473 1048954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem
	I1018 13:28:21.248523 1048954 main.go:141] libmachine: Decoding PEM data...
	I1018 13:28:21.248541 1048954 main.go:141] libmachine: Parsing certificate...
	I1018 13:28:21.248608 1048954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem
	I1018 13:28:21.248645 1048954 main.go:141] libmachine: Decoding PEM data...
	I1018 13:28:21.248659 1048954 main.go:141] libmachine: Parsing certificate...
	I1018 13:28:21.249076 1048954 cli_runner.go:164] Run: docker network inspect auto-633218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 13:28:21.285509 1048954 cli_runner.go:211] docker network inspect auto-633218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 13:28:21.285595 1048954 network_create.go:284] running [docker network inspect auto-633218] to gather additional debugging logs...
	I1018 13:28:21.285615 1048954 cli_runner.go:164] Run: docker network inspect auto-633218
	W1018 13:28:21.321268 1048954 cli_runner.go:211] docker network inspect auto-633218 returned with exit code 1
	I1018 13:28:21.321306 1048954 network_create.go:287] error running [docker network inspect auto-633218]: docker network inspect auto-633218: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-633218 not found
	I1018 13:28:21.321320 1048954 network_create.go:289] output of [docker network inspect auto-633218]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-633218 not found
	
	** /stderr **
	I1018 13:28:21.321430 1048954 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 13:28:21.385830 1048954 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
	I1018 13:28:21.386224 1048954 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1b162987809b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ce:5f:25:ac:cd:2a} reservation:<nil>}
	I1018 13:28:21.386467 1048954 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c986d614dab5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:69:4f:12:e6:e4} reservation:<nil>}
	I1018 13:28:21.386796 1048954 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b6e5d236d58b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:10:25:d1:8b:e0} reservation:<nil>}
	I1018 13:28:21.387213 1048954 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c9b10}
	I1018 13:28:21.387238 1048954 network_create.go:124] attempt to create docker network auto-633218 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 13:28:21.387300 1048954 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-633218 auto-633218
	I1018 13:28:21.452537 1048954 network_create.go:108] docker network auto-633218 192.168.85.0/24 created
	I1018 13:28:21.452574 1048954 kic.go:121] calculated static IP "192.168.85.2" for the "auto-633218" container
	I1018 13:28:21.452661 1048954 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 13:28:21.473840 1048954 cli_runner.go:164] Run: docker volume create auto-633218 --label name.minikube.sigs.k8s.io=auto-633218 --label created_by.minikube.sigs.k8s.io=true
	I1018 13:28:21.500997 1048954 oci.go:103] Successfully created a docker volume auto-633218
	I1018 13:28:21.501080 1048954 cli_runner.go:164] Run: docker run --rm --name auto-633218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-633218 --entrypoint /usr/bin/test -v auto-633218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 13:28:22.166675 1048954 oci.go:107] Successfully prepared a docker volume auto-633218
	I1018 13:28:22.166725 1048954 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 13:28:22.166745 1048954 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 13:28:22.166829 1048954 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-633218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 13:28:29.552459 1046185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.032227837s)
	I1018 13:28:29.552520 1046185 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.00912385s)
	I1018 13:28:29.552531 1046185 api_server.go:72] duration metric: took 10.567489068s to wait for apiserver process to appear ...
	I1018 13:28:29.552538 1046185 api_server.go:88] waiting for apiserver healthz status ...
	I1018 13:28:29.552554 1046185 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:28:29.552860 1046185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.850602097s)
	I1018 13:28:29.553138 1046185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.299282198s)
	I1018 13:28:29.555993 1046185 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-977407 addons enable metrics-server
	
	I1018 13:28:29.576487 1046185 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 13:28:29.576512 1046185 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 13:28:29.591020 1046185 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 13:28:29.593847 1046185 addons.go:514] duration metric: took 10.60844078s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 13:28:30.052656 1046185 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 13:28:30.078802 1046185 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 13:28:30.084797 1046185 api_server.go:141] control plane version: v1.34.1
	I1018 13:28:30.084877 1046185 api_server.go:131] duration metric: took 532.331972ms to wait for apiserver health ...
	I1018 13:28:30.084903 1046185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 13:28:30.096467 1046185 system_pods.go:59] 8 kube-system pods found
	I1018 13:28:30.096561 1046185 system_pods.go:61] "coredns-66bc5c9577-h2dzv" [7bf41590-b205-482b-a509-cca14eef8f53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 13:28:30.096586 1046185 system_pods.go:61] "etcd-newest-cni-977407" [e959f287-a8d0-4c66-882a-7bf03c0d596b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 13:28:30.096626 1046185 system_pods.go:61] "kindnet-g5rjn" [62df2833-c27f-44a7-932f-ddd5e8e4888e] Running
	I1018 13:28:30.096657 1046185 system_pods.go:61] "kube-apiserver-newest-cni-977407" [dfc137e0-d480-483e-96e3-85ca7dba3e3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 13:28:30.096684 1046185 system_pods.go:61] "kube-controller-manager-newest-cni-977407" [d43756f2-e9bd-413a-b29f-828c43157138] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 13:28:30.096709 1046185 system_pods.go:61] "kube-proxy-x4kds" [fd820b89-8782-4a68-8488-8eae7823ed4e] Running
	I1018 13:28:30.096746 1046185 system_pods.go:61] "kube-scheduler-newest-cni-977407" [bbe144ae-f7e7-4fb9-b026-a17a60555951] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 13:28:30.096776 1046185 system_pods.go:61] "storage-provisioner" [4d216f4e-9951-4993-8149-3f06f900b895] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 13:28:30.096802 1046185 system_pods.go:74] duration metric: took 11.879857ms to wait for pod list to return data ...
	I1018 13:28:30.096827 1046185 default_sa.go:34] waiting for default service account to be created ...
	I1018 13:28:30.104242 1046185 default_sa.go:45] found service account: "default"
	I1018 13:28:30.104313 1046185 default_sa.go:55] duration metric: took 7.451146ms for default service account to be created ...
	I1018 13:28:30.104348 1046185 kubeadm.go:586] duration metric: took 11.119302289s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 13:28:30.104424 1046185 node_conditions.go:102] verifying NodePressure condition ...
	I1018 13:28:30.108508 1046185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 13:28:30.108592 1046185 node_conditions.go:123] node cpu capacity is 2
	I1018 13:28:30.108620 1046185 node_conditions.go:105] duration metric: took 4.175818ms to run NodePressure ...
	I1018 13:28:30.108659 1046185 start.go:241] waiting for startup goroutines ...
	I1018 13:28:30.108685 1046185 start.go:246] waiting for cluster config update ...
	I1018 13:28:30.108712 1046185 start.go:255] writing updated cluster config ...
	I1018 13:28:30.109052 1046185 ssh_runner.go:195] Run: rm -f paused
	I1018 13:28:30.219124 1046185 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 13:28:30.223104 1046185 out.go:179] * Done! kubectl is now configured to use "newest-cni-977407" cluster and "default" namespace by default
	I1018 13:28:27.546308 1048954 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-633218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.379428549s)
	I1018 13:28:27.546337 1048954 kic.go:203] duration metric: took 5.379589174s to extract preloaded images to volume ...
	W1018 13:28:27.546460 1048954 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 13:28:27.546589 1048954 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 13:28:27.679540 1048954 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-633218 --name auto-633218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-633218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-633218 --network auto-633218 --ip 192.168.85.2 --volume auto-633218:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 13:28:28.193625 1048954 cli_runner.go:164] Run: docker container inspect auto-633218 --format={{.State.Running}}
	I1018 13:28:28.228282 1048954 cli_runner.go:164] Run: docker container inspect auto-633218 --format={{.State.Status}}
	I1018 13:28:28.259607 1048954 cli_runner.go:164] Run: docker exec auto-633218 stat /var/lib/dpkg/alternatives/iptables
	I1018 13:28:28.330734 1048954 oci.go:144] the created container "auto-633218" has a running status.
	I1018 13:28:28.330768 1048954 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa...
	I1018 13:28:28.823478 1048954 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 13:28:28.853368 1048954 cli_runner.go:164] Run: docker container inspect auto-633218 --format={{.State.Status}}
	I1018 13:28:28.880090 1048954 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 13:28:28.880115 1048954 kic_runner.go:114] Args: [docker exec --privileged auto-633218 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 13:28:28.980097 1048954 cli_runner.go:164] Run: docker container inspect auto-633218 --format={{.State.Status}}
	I1018 13:28:29.009653 1048954 machine.go:93] provisionDockerMachine start ...
	I1018 13:28:29.009745 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:29.038173 1048954 main.go:141] libmachine: Using SSH client type: native
	I1018 13:28:29.038541 1048954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1018 13:28:29.038551 1048954 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 13:28:29.039434 1048954 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36076->127.0.0.1:34207: read: connection reset by peer
	I1018 13:28:32.187547 1048954 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-633218
	
	I1018 13:28:32.187574 1048954 ubuntu.go:182] provisioning hostname "auto-633218"
	I1018 13:28:32.187642 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:32.205947 1048954 main.go:141] libmachine: Using SSH client type: native
	I1018 13:28:32.206260 1048954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1018 13:28:32.206277 1048954 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-633218 && echo "auto-633218" | sudo tee /etc/hostname
	I1018 13:28:32.372336 1048954 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-633218
	
	I1018 13:28:32.372411 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:32.405603 1048954 main.go:141] libmachine: Using SSH client type: native
	I1018 13:28:32.405973 1048954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1018 13:28:32.405998 1048954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-633218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-633218/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-633218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 13:28:32.573915 1048954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 13:28:32.573940 1048954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-834184/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-834184/.minikube}
	I1018 13:28:32.573969 1048954 ubuntu.go:190] setting up certificates
	I1018 13:28:32.573979 1048954 provision.go:84] configureAuth start
	I1018 13:28:32.574055 1048954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-633218
	I1018 13:28:32.593257 1048954 provision.go:143] copyHostCerts
	I1018 13:28:32.593329 1048954 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem, removing ...
	I1018 13:28:32.593353 1048954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem
	I1018 13:28:32.593432 1048954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/ca.pem (1082 bytes)
	I1018 13:28:32.593525 1048954 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem, removing ...
	I1018 13:28:32.593536 1048954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem
	I1018 13:28:32.593564 1048954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/cert.pem (1123 bytes)
	I1018 13:28:32.593621 1048954 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem, removing ...
	I1018 13:28:32.593635 1048954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem
	I1018 13:28:32.593660 1048954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-834184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-834184/.minikube/key.pem (1675 bytes)
	I1018 13:28:32.593710 1048954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca-key.pem org=jenkins.auto-633218 san=[127.0.0.1 192.168.85.2 auto-633218 localhost minikube]
	I1018 13:28:33.236708 1048954 provision.go:177] copyRemoteCerts
	I1018 13:28:33.236778 1048954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 13:28:33.236840 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:33.258879 1048954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa Username:docker}
	I1018 13:28:33.369872 1048954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1018 13:28:33.397107 1048954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 13:28:33.424285 1048954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 13:28:33.449631 1048954 provision.go:87] duration metric: took 875.624629ms to configureAuth
	I1018 13:28:33.449657 1048954 ubuntu.go:206] setting minikube options for container-runtime
	I1018 13:28:33.449838 1048954 config.go:182] Loaded profile config "auto-633218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:28:33.449945 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:33.489417 1048954 main.go:141] libmachine: Using SSH client type: native
	I1018 13:28:33.489726 1048954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 34207 <nil> <nil>}
	I1018 13:28:33.489746 1048954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 13:28:33.803040 1048954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 13:28:33.803062 1048954 machine.go:96] duration metric: took 4.793389887s to provisionDockerMachine
	I1018 13:28:33.803072 1048954 client.go:171] duration metric: took 12.554701624s to LocalClient.Create
	I1018 13:28:33.803083 1048954 start.go:167] duration metric: took 12.554765797s to libmachine.API.Create "auto-633218"
	I1018 13:28:33.803090 1048954 start.go:293] postStartSetup for "auto-633218" (driver="docker")
	I1018 13:28:33.803099 1048954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 13:28:33.803170 1048954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 13:28:33.803209 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:33.835099 1048954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa Username:docker}
	I1018 13:28:33.949505 1048954 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 13:28:33.953436 1048954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 13:28:33.953464 1048954 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 13:28:33.953476 1048954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/addons for local assets ...
	I1018 13:28:33.953538 1048954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-834184/.minikube/files for local assets ...
	I1018 13:28:33.953637 1048954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem -> 8360862.pem in /etc/ssl/certs
	I1018 13:28:33.953748 1048954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 13:28:33.964451 1048954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/ssl/certs/8360862.pem --> /etc/ssl/certs/8360862.pem (1708 bytes)
	I1018 13:28:33.993653 1048954 start.go:296] duration metric: took 190.549764ms for postStartSetup
	I1018 13:28:33.994020 1048954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-633218
	I1018 13:28:34.034517 1048954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/config.json ...
	I1018 13:28:34.034808 1048954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:28:34.034849 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:34.065066 1048954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa Username:docker}
	I1018 13:28:34.173326 1048954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 13:28:34.178578 1048954 start.go:128] duration metric: took 12.933886649s to createHost
	I1018 13:28:34.178603 1048954 start.go:83] releasing machines lock for "auto-633218", held for 12.934015636s
	I1018 13:28:34.178683 1048954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-633218
	I1018 13:28:34.204221 1048954 ssh_runner.go:195] Run: cat /version.json
	I1018 13:28:34.204302 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:34.204623 1048954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 13:28:34.204690 1048954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-633218
	I1018 13:28:34.251817 1048954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa Username:docker}
	I1018 13:28:34.260216 1048954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34207 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/auto-633218/id_rsa Username:docker}
	I1018 13:28:34.491327 1048954 ssh_runner.go:195] Run: systemctl --version
	I1018 13:28:34.498752 1048954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 13:28:34.574694 1048954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 13:28:34.581458 1048954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 13:28:34.581547 1048954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 13:28:34.616744 1048954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 13:28:34.616769 1048954 start.go:495] detecting cgroup driver to use...
	I1018 13:28:34.616813 1048954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 13:28:34.616869 1048954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 13:28:34.641439 1048954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 13:28:34.659887 1048954 docker.go:218] disabling cri-docker service (if available) ...
	I1018 13:28:34.659954 1048954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 13:28:34.684745 1048954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 13:28:34.712117 1048954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 13:28:34.915875 1048954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 13:28:35.109811 1048954 docker.go:234] disabling docker service ...
	I1018 13:28:35.109968 1048954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 13:28:35.142000 1048954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 13:28:35.159366 1048954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 13:28:35.325395 1048954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 13:28:35.509714 1048954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 13:28:35.532126 1048954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 13:28:35.554535 1048954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 13:28:35.554614 1048954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:28:35.565513 1048954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 13:28:35.565593 1048954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:28:35.577607 1048954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:28:35.588152 1048954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:28:35.597920 1048954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 13:28:35.606961 1048954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:28:35.616438 1048954 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:28:35.636808 1048954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 13:28:35.647830 1048954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 13:28:35.660977 1048954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 13:28:35.668844 1048954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 13:28:35.851735 1048954 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 13:28:36.029414 1048954 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 13:28:36.029566 1048954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 13:28:36.040110 1048954 start.go:563] Will wait 60s for crictl version
	I1018 13:28:36.040189 1048954 ssh_runner.go:195] Run: which crictl
	I1018 13:28:36.048399 1048954 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 13:28:36.091093 1048954 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 13:28:36.091181 1048954 ssh_runner.go:195] Run: crio --version
	I1018 13:28:36.130515 1048954 ssh_runner.go:195] Run: crio --version
	I1018 13:28:36.184146 1048954 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.494114063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.51472755Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ed6ac9e9-d117-40fb-8ea1-63ac6cf9d553 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.519077844Z" level=info msg="Ran pod sandbox 8b69f0532eb0065180763ad105c02df59b6fb1e63af6bc7ce1a715718632e5c9 with infra container: kube-system/kube-proxy-x4kds/POD" id=ed6ac9e9-d117-40fb-8ea1-63ac6cf9d553 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.523105279Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f290b2b9-79cb-4139-ab7d-38472d020c06 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.528098108Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9dd33a9f-7191-43de-a7b0-8c2bd9c61540 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.531306916Z" level=info msg="Creating container: kube-system/kube-proxy-x4kds/kube-proxy" id=494fb4a9-72bd-4cfe-8b0d-c0973cde1e19 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.531869072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.547540381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.549565283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.736410318Z" level=info msg="Running pod sandbox: kube-system/kindnet-g5rjn/POD" id=ea89191a-4187-4766-a0e8-4af5eb8c6e12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.736488267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.758774799Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ea89191a-4187-4766-a0e8-4af5eb8c6e12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.781601539Z" level=info msg="Ran pod sandbox 3b13a60cdf0ddab9196d46244f541646f325059b335ebc73f93ea0d5980b65ab with infra container: kube-system/kindnet-g5rjn/POD" id=ea89191a-4187-4766-a0e8-4af5eb8c6e12 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.793150397Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d0a13a1d-edcb-4a35-84aa-fff622675db1 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.797683864Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4d3ff67b-19b5-460f-9deb-9400d348380c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.802611313Z" level=info msg="Creating container: kube-system/kindnet-g5rjn/kindnet-cni" id=08998fed-74cb-4fd3-9715-467f9f880648 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.803330198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.839413187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.846955296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.897652147Z" level=info msg="Created container 47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219: kube-system/kindnet-g5rjn/kindnet-cni" id=08998fed-74cb-4fd3-9715-467f9f880648 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.898289808Z" level=info msg="Starting container: 47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219" id=a2374e16-ac0b-43b4-a099-dc7e6bc966aa name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.904710862Z" level=info msg="Started container" PID=1065 containerID=47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219 description=kube-system/kindnet-g5rjn/kindnet-cni id=a2374e16-ac0b-43b4-a099-dc7e6bc966aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b13a60cdf0ddab9196d46244f541646f325059b335ebc73f93ea0d5980b65ab
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.987010979Z" level=info msg="Created container 1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6: kube-system/kube-proxy-x4kds/kube-proxy" id=494fb4a9-72bd-4cfe-8b0d-c0973cde1e19 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 13:28:27 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.990216554Z" level=info msg="Starting container: 1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6" id=a657bed0-ef5b-4e47-97e0-b6c654cb9e65 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 13:28:28 newest-cni-977407 crio[614]: time="2025-10-18T13:28:27.998653787Z" level=info msg="Started container" PID=1058 containerID=1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6 description=kube-system/kube-proxy-x4kds/kube-proxy id=a657bed0-ef5b-4e47-97e0-b6c654cb9e65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b69f0532eb0065180763ad105c02df59b6fb1e63af6bc7ce1a715718632e5c9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	47be6b9a3f94a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 seconds ago       Running             kindnet-cni               1                   3b13a60cdf0dd       kindnet-g5rjn                               kube-system
	1133dcb977e41       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 seconds ago       Running             kube-proxy                1                   8b69f0532eb00       kube-proxy-x4kds                            kube-system
	f57b68170e6bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            1                   72a71dfd42260       kube-apiserver-newest-cni-977407            kube-system
	2afe57e755a93       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            1                   ee373a63e0481       kube-scheduler-newest-cni-977407            kube-system
	b1830215f5796       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      1                   95a6a35759920       etcd-newest-cni-977407                      kube-system
	5d84967a25d43       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   1                   0daef395975b3       kube-controller-manager-newest-cni-977407   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-977407
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-977407
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=newest-cni-977407
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T13_27_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 13:27:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-977407
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 13:28:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 13:28:27 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 13:28:27 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 13:28:27 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 13:28:27 +0000   Sat, 18 Oct 2025 13:27:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-977407
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f89834aa-d14f-47e3-baef-c9c838d135d3
	  Boot ID:                    b42606f0-b77a-4ab9-9450-63f9e79403e9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-977407                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-g5rjn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-newest-cni-977407             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-977407    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-x4kds                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-newest-cni-977407             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 31s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   Starting                 47s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 47s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 47s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 47s)  kubelet          Node newest-cni-977407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 47s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     39s                kubelet          Node newest-cni-977407 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 39s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  39s                kubelet          Node newest-cni-977407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s                kubelet          Node newest-cni-977407 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 39s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           35s                node-controller  Node newest-cni-977407 event: Registered Node newest-cni-977407 in Controller
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node newest-cni-977407 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 19s)  kubelet          Node newest-cni-977407 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-977407 event: Registered Node newest-cni-977407 in Controller
	
	
	==> dmesg <==
	[  +2.054181] overlayfs: idmapped layers are currently not supported
	[Oct18 13:04] overlayfs: idmapped layers are currently not supported
	[Oct18 13:05] overlayfs: idmapped layers are currently not supported
	[ +44.860774] overlayfs: idmapped layers are currently not supported
	[Oct18 13:06] overlayfs: idmapped layers are currently not supported
	[Oct18 13:07] overlayfs: idmapped layers are currently not supported
	[Oct18 13:08] overlayfs: idmapped layers are currently not supported
	[Oct18 13:11] overlayfs: idmapped layers are currently not supported
	[Oct18 13:12] overlayfs: idmapped layers are currently not supported
	[Oct18 13:13] overlayfs: idmapped layers are currently not supported
	[Oct18 13:16] overlayfs: idmapped layers are currently not supported
	[Oct18 13:18] overlayfs: idmapped layers are currently not supported
	[ +22.447718] overlayfs: idmapped layers are currently not supported
	[Oct18 13:19] overlayfs: idmapped layers are currently not supported
	[ +17.234503] overlayfs: idmapped layers are currently not supported
	[Oct18 13:20] overlayfs: idmapped layers are currently not supported
	[Oct18 13:21] overlayfs: idmapped layers are currently not supported
	[Oct18 13:22] overlayfs: idmapped layers are currently not supported
	[Oct18 13:23] overlayfs: idmapped layers are currently not supported
	[Oct18 13:24] overlayfs: idmapped layers are currently not supported
	[Oct18 13:25] overlayfs: idmapped layers are currently not supported
	[Oct18 13:26] overlayfs: idmapped layers are currently not supported
	[Oct18 13:27] overlayfs: idmapped layers are currently not supported
	[ +43.080166] overlayfs: idmapped layers are currently not supported
	[Oct18 13:28] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b1830215f5796a4f0b3218446759af2fb595fb77aefe7a2cceb1563d3ed52a70] <==
	{"level":"warn","ts":"2025-10-18T13:28:25.294982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.315134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.350409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.379556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.425469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.459915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.492083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.513979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.538229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.611179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.638736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.667233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T13:28:25.747855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47456","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T13:28:27.213072Z","caller":"traceutil/trace.go:172","msg":"trace[1460142882] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"118.943942ms","start":"2025-10-18T13:28:27.094113Z","end":"2025-10-18T13:28:27.213057Z","steps":["trace[1460142882] 'process raft request'  (duration: 118.241551ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T13:28:27.364218Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.213004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/newest-cni-977407\" limit:1 ","response":"range_response_count:1 size:5672"}
	{"level":"info","ts":"2025-10-18T13:28:27.364345Z","caller":"traceutil/trace.go:172","msg":"trace[1316027430] range","detail":"{range_begin:/registry/minions/newest-cni-977407; range_end:; response_count:1; response_revision:431; }","duration":"143.357308ms","start":"2025-10-18T13:28:27.220974Z","end":"2025-10-18T13:28:27.364332Z","steps":["trace[1316027430] 'agreement among raft nodes before linearized reading'  (duration: 96.332745ms)","trace[1316027430] 'range keys from in-memory index tree'  (duration: 46.812278ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T13:28:27.364591Z","caller":"traceutil/trace.go:172","msg":"trace[592095908] transaction","detail":"{read_only:false; number_of_response:0; response_revision:431; }","duration":"136.643575ms","start":"2025-10-18T13:28:27.227933Z","end":"2025-10-18T13:28:27.364577Z","steps":["trace[592095908] 'process raft request'  (duration: 89.438513ms)","trace[592095908] 'compare'  (duration: 46.634561ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T13:28:27.366881Z","caller":"traceutil/trace.go:172","msg":"trace[911608483] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"112.309495ms","start":"2025-10-18T13:28:27.254560Z","end":"2025-10-18T13:28:27.366869Z","steps":["trace[911608483] 'process raft request'  (duration: 109.493373ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T13:28:27.367319Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.53857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-g5rjn\" limit:1 ","response":"range_response_count:1 size:5409"}
	{"level":"info","ts":"2025-10-18T13:28:27.367777Z","caller":"traceutil/trace.go:172","msg":"trace[162470990] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-g5rjn; range_end:; response_count:1; response_revision:432; }","duration":"110.997463ms","start":"2025-10-18T13:28:27.256771Z","end":"2025-10-18T13:28:27.367769Z","steps":["trace[162470990] 'agreement among raft nodes before linearized reading'  (duration: 110.051549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T13:28:27.367378Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.214977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-x4kds\" limit:1 ","response":"range_response_count:1 size:5192"}
	{"level":"info","ts":"2025-10-18T13:28:27.367974Z","caller":"traceutil/trace.go:172","msg":"trace[1450751976] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-x4kds; range_end:; response_count:1; response_revision:432; }","duration":"120.022666ms","start":"2025-10-18T13:28:27.247943Z","end":"2025-10-18T13:28:27.367966Z","steps":["trace[1450751976] 'agreement among raft nodes before linearized reading'  (duration: 116.172768ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T13:28:27.367677Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.267372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-g5rjn\" limit:1 ","response":"range_response_count:1 size:5409"}
	{"level":"info","ts":"2025-10-18T13:28:27.368083Z","caller":"traceutil/trace.go:172","msg":"trace[1977778883] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-g5rjn; range_end:; response_count:1; response_revision:432; }","duration":"113.678528ms","start":"2025-10-18T13:28:27.254398Z","end":"2025-10-18T13:28:27.368077Z","steps":["trace[1977778883] 'agreement among raft nodes before linearized reading'  (duration: 112.43579ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T13:28:27.560625Z","caller":"traceutil/trace.go:172","msg":"trace[2130343996] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"100.588401ms","start":"2025-10-18T13:28:27.460019Z","end":"2025-10-18T13:28:27.560607Z","steps":["trace[2130343996] 'process raft request'  (duration: 100.496437ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:28:37 up  5:11,  0 user,  load average: 4.91, 3.39, 2.69
	Linux newest-cni-977407 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [47be6b9a3f94ac852554b30b7975c885ee5839a3ca109b0782b8e9b422aed219] <==
	I1018 13:28:28.114557       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 13:28:28.115428       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 13:28:28.120624       1 main.go:148] setting mtu 1500 for CNI 
	I1018 13:28:28.120657       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 13:28:28.120673       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T13:28:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 13:28:28.319886       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 13:28:28.319916       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 13:28:28.319924       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 13:28:28.320546       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [f57b68170e6bd013db59a08eb837e502367ed5c6eed4102abd22b2a73814a34e] <==
	I1018 13:28:27.030612       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 13:28:27.031946       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 13:28:27.039262       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 13:28:27.039449       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 13:28:27.039500       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 13:28:27.039620       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 13:28:27.040501       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 13:28:27.044759       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 13:28:27.044828       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 13:28:27.053089       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 13:28:27.083929       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 13:28:27.083962       1 policy_source.go:240] refreshing policies
	I1018 13:28:27.093512       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 13:28:27.247392       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 13:28:27.611402       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 13:28:28.689325       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 13:28:29.024944       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 13:28:29.114962       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 13:28:29.142975       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 13:28:29.393437       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.183.201"}
	I1018 13:28:29.448981       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.212.39"}
	I1018 13:28:31.417513       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 13:28:31.512659       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 13:28:31.601808       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 13:28:31.806256       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [5d84967a25d43b19dd6d736fe8745b5359fb545fe329c23a5a2c2bc56cc81b5d] <==
	I1018 13:28:31.377604       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 13:28:31.379844       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 13:28:31.381578       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 13:28:31.381586       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 13:28:31.381614       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 13:28:31.384719       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 13:28:31.386492       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 13:28:31.386552       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 13:28:31.386623       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 13:28:31.386624       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 13:28:31.391749       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 13:28:31.392023       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 13:28:31.392215       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:28:31.393373       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 13:28:31.397096       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 13:28:31.397212       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 13:28:31.403783       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 13:28:31.404656       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 13:28:31.413923       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 13:28:31.419876       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 13:28:31.419952       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 13:28:31.419995       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 13:28:31.420957       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 13:28:31.446860       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 13:28:31.456818       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	
	
	==> kube-proxy [1133dcb977e4183682b3afa9dea83c872da5be4549c270692c7aeff3d5b6d2f6] <==
	I1018 13:28:29.349389       1 server_linux.go:53] "Using iptables proxy"
	I1018 13:28:29.704648       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 13:28:29.807720       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 13:28:29.807784       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 13:28:29.819743       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 13:28:29.857035       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 13:28:29.857091       1 server_linux.go:132] "Using iptables Proxier"
	I1018 13:28:29.941939       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 13:28:29.948034       1 server.go:527] "Version info" version="v1.34.1"
	I1018 13:28:29.948067       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:28:29.951129       1 config.go:200] "Starting service config controller"
	I1018 13:28:29.951150       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 13:28:29.951169       1 config.go:106] "Starting endpoint slice config controller"
	I1018 13:28:29.951183       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 13:28:29.951197       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 13:28:29.951201       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 13:28:29.952934       1 config.go:309] "Starting node config controller"
	I1018 13:28:29.952954       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 13:28:29.952961       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 13:28:30.077101       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 13:28:30.077316       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 13:28:30.077373       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2afe57e755a936bddf779258179463776b140bea5c0043c7cf534a24dd203124] <==
	I1018 13:28:29.697804       1 serving.go:386] Generated self-signed cert in-memory
	I1018 13:28:31.011198       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 13:28:31.011254       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 13:28:31.019082       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 13:28:31.019174       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 13:28:31.019197       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 13:28:31.019226       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 13:28:31.048003       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:28:31.048117       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 13:28:31.048248       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:28:31.048285       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:28:31.133651       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 13:28:31.349276       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 13:28:31.349359       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 13:28:25 newest-cni-977407 kubelet[728]: E1018 13:28:25.090261     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-977407\" not found" node="newest-cni-977407"
	Oct 18 13:28:25 newest-cni-977407 kubelet[728]: E1018 13:28:25.403291     728 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-977407\" not found" node="newest-cni-977407"
	Oct 18 13:28:26 newest-cni-977407 kubelet[728]: I1018 13:28:26.833799     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.120150     728 apiserver.go:52] "Watching apiserver"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.136751     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.169899     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd820b89-8782-4a68-8488-8eae7823ed4e-lib-modules\") pod \"kube-proxy-x4kds\" (UID: \"fd820b89-8782-4a68-8488-8eae7823ed4e\") " pod="kube-system/kube-proxy-x4kds"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.169973     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd820b89-8782-4a68-8488-8eae7823ed4e-xtables-lock\") pod \"kube-proxy-x4kds\" (UID: \"fd820b89-8782-4a68-8488-8eae7823ed4e\") " pod="kube-system/kube-proxy-x4kds"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.170001     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-xtables-lock\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.170018     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-lib-modules\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.170050     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/62df2833-c27f-44a7-932f-ddd5e8e4888e-cni-cfg\") pod \"kindnet-g5rjn\" (UID: \"62df2833-c27f-44a7-932f-ddd5e8e4888e\") " pod="kube-system/kindnet-g5rjn"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: E1018 13:28:27.219509     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-977407\" already exists" pod="kube-system/kube-controller-manager-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.219568     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: E1018 13:28:27.403859     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-977407\" already exists" pod="kube-system/kube-scheduler-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.403996     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.405004     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.418628     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.418858     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.418976     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.419906     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: E1018 13:28:27.510466     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-977407\" already exists" pod="kube-system/etcd-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: I1018 13:28:27.510508     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-977407"
	Oct 18 13:28:27 newest-cni-977407 kubelet[728]: E1018 13:28:27.612263     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-977407\" already exists" pod="kube-system/kube-apiserver-newest-cni-977407"
	Oct 18 13:28:31 newest-cni-977407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 13:28:31 newest-cni-977407 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 13:28:31 newest-cni-977407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-977407 -n newest-cni-977407
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-977407 -n newest-cni-977407: exit status 2 (486.256363ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-977407 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h2dzv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zzfsb kubernetes-dashboard-855c9754f9-vm8ng
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zzfsb kubernetes-dashboard-855c9754f9-vm8ng
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zzfsb kubernetes-dashboard-855c9754f9-vm8ng: exit status 1 (100.202547ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h2dzv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-zzfsb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vm8ng" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-977407 describe pod coredns-66bc5c9577-h2dzv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zzfsb kubernetes-dashboard-855c9754f9-vm8ng: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.25s)
E1018 13:34:54.350236  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:54.356626  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:54.368118  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:54.389499  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:54.431261  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:54.512643  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:54.673993  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:54.995475  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:55.637029  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:56.919177  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:34:59.480562  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:04.602378  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (257/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.08
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.35
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.68
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.11
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.15
27 TestAddons/Setup 175.02
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 8.85
48 TestAddons/StoppedEnableDisable 12.39
49 TestCertOptions 37.43
50 TestCertExpiration 337.11
52 TestForceSystemdFlag 40.88
53 TestForceSystemdEnv 41.98
59 TestErrorSpam/setup 32.1
60 TestErrorSpam/start 0.78
61 TestErrorSpam/status 1.1
62 TestErrorSpam/pause 7.31
63 TestErrorSpam/unpause 4.95
64 TestErrorSpam/stop 1.51
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 51.55
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.06
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
76 TestFunctional/serial/CacheCmd/cache/add_local 1.14
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 42.22
85 TestFunctional/serial/ComponentHealth 0.14
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.55
88 TestFunctional/serial/InvalidService 4.29
90 TestFunctional/parallel/ConfigCmd 0.47
91 TestFunctional/parallel/DashboardCmd 9.76
92 TestFunctional/parallel/DryRun 0.55
93 TestFunctional/parallel/InternationalLanguage 0.2
94 TestFunctional/parallel/StatusCmd 1.04
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 26.54
102 TestFunctional/parallel/SSHCmd 0.62
103 TestFunctional/parallel/CpCmd 2.14
105 TestFunctional/parallel/FileSync 0.32
106 TestFunctional/parallel/CertSync 2.3
110 TestFunctional/parallel/NodeLabels 0.1
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 1.17
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.89
122 TestFunctional/parallel/ImageCommands/Setup 0.7
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ServiceCmd/List 0.53
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
151 TestFunctional/parallel/ProfileCmd/profile_list 0.44
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
153 TestFunctional/parallel/MountCmd/any-port 8.11
154 TestFunctional/parallel/MountCmd/specific-port 1.88
155 TestFunctional/parallel/MountCmd/VerifyCleanup 2.43
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 203.73
164 TestMultiControlPlane/serial/DeployApp 8.53
165 TestMultiControlPlane/serial/PingHostFromPods 1.55
166 TestMultiControlPlane/serial/AddWorkerNode 32.49
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
169 TestMultiControlPlane/serial/CopyFile 20.54
170 TestMultiControlPlane/serial/StopSecondaryNode 12.94
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 21.08
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.08
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 129.33
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.92
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
177 TestMultiControlPlane/serial/StopCluster 36.13
180 TestMultiControlPlane/serial/AddSecondaryNode 82.84
185 TestJSONOutput/start/Command 79.4
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 37.79
211 TestKicCustomNetwork/use_default_bridge_network 33.66
212 TestKicExistingNetwork 39.23
213 TestKicCustomSubnet 40.07
214 TestKicStaticIP 38.03
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 75.64
219 TestMountStart/serial/StartWithMountFirst 9.24
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 8.83
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.72
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 134.76
231 TestMultiNode/serial/DeployApp2Nodes 5.21
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 58.54
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.74
236 TestMultiNode/serial/CopyFile 10.64
237 TestMultiNode/serial/StopNode 2.44
238 TestMultiNode/serial/StartAfterStop 8.17
239 TestMultiNode/serial/RestartKeepsNodes 74.52
240 TestMultiNode/serial/DeleteNode 5.69
241 TestMultiNode/serial/StopMultiNode 24.02
242 TestMultiNode/serial/RestartMultiNode 52.67
243 TestMultiNode/serial/ValidateNameConflict 35.59
248 TestPreload 131.71
250 TestScheduledStopUnix 109.15
253 TestInsufficientStorage 13.34
254 TestRunningBinaryUpgrade 56.32
256 TestKubernetesUpgrade 347.62
257 TestMissingContainerUpgrade 112.56
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 51.84
261 TestNoKubernetes/serial/StartWithStopK8s 118.57
262 TestNoKubernetes/serial/Start 8.82
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
264 TestNoKubernetes/serial/ProfileList 32.67
265 TestNoKubernetes/serial/Stop 1.3
266 TestNoKubernetes/serial/StartNoArgs 7.73
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 0.89
269 TestStoppedBinaryUpgrade/Upgrade 58.08
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
279 TestPause/serial/Start 78.54
280 TestPause/serial/SecondStartNoReconfiguration 23.83
289 TestNetworkPlugins/group/false 5.08
294 TestStartStop/group/old-k8s-version/serial/FirstStart 63.82
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.45
297 TestStartStop/group/old-k8s-version/serial/Stop 12.03
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 50.12
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
305 TestStartStop/group/no-preload/serial/FirstStart 68.52
306 TestStartStop/group/no-preload/serial/DeployApp 8.33
308 TestStartStop/group/no-preload/serial/Stop 12.03
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
310 TestStartStop/group/no-preload/serial/SecondStart 56.19
312 TestStartStop/group/embed-certs/serial/FirstStart 84.81
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.18
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.5
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.2
319 TestStartStop/group/embed-certs/serial/DeployApp 10.38
321 TestStartStop/group/embed-certs/serial/Stop 12.05
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
323 TestStartStop/group/embed-certs/serial/SecondStart 52.78
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.45
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.38
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57.7
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
334 TestStartStop/group/newest-cni/serial/FirstStart 40.21
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
337 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/Stop 1.37
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
343 TestStartStop/group/newest-cni/serial/SecondStart 21.25
344 TestNetworkPlugins/group/auto/Start 92.96
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
349 TestNetworkPlugins/group/kindnet/Start 87.62
350 TestNetworkPlugins/group/auto/KubeletFlags 0.31
351 TestNetworkPlugins/group/auto/NetCatPod 9.28
352 TestNetworkPlugins/group/auto/DNS 0.17
353 TestNetworkPlugins/group/auto/Localhost 0.15
354 TestNetworkPlugins/group/auto/HairPin 0.13
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
357 TestNetworkPlugins/group/kindnet/NetCatPod 11.44
358 TestNetworkPlugins/group/calico/Start 71.15
359 TestNetworkPlugins/group/kindnet/DNS 0.21
360 TestNetworkPlugins/group/kindnet/Localhost 0.2
361 TestNetworkPlugins/group/kindnet/HairPin 0.2
362 TestNetworkPlugins/group/custom-flannel/Start 64.54
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.37
365 TestNetworkPlugins/group/calico/NetCatPod 10.26
366 TestNetworkPlugins/group/calico/DNS 0.2
367 TestNetworkPlugins/group/calico/Localhost 0.17
368 TestNetworkPlugins/group/calico/HairPin 0.15
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.3
371 TestNetworkPlugins/group/custom-flannel/DNS 0.22
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
374 TestNetworkPlugins/group/enable-default-cni/Start 86.65
375 TestNetworkPlugins/group/flannel/Start 64.74
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.47
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
379 TestNetworkPlugins/group/flannel/NetCatPod 11.4
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.46
381 TestNetworkPlugins/group/flannel/DNS 0.15
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
383 TestNetworkPlugins/group/flannel/Localhost 0.2
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
385 TestNetworkPlugins/group/flannel/HairPin 0.2
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
387 TestNetworkPlugins/group/bridge/Start 43.61
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
389 TestNetworkPlugins/group/bridge/NetCatPod 9.26
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (5.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-019533 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-019533 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.075095704s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 12:15:39.477507  836086 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 12:15:39.477589  836086 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-019533
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-019533: exit status 85 (100.091351ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-019533 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-019533 │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:15:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:15:34.452778  836091 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:15:34.452963  836091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:15:34.452975  836091 out.go:374] Setting ErrFile to fd 2...
	I1018 12:15:34.452981  836091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:15:34.453362  836091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	W1018 12:15:34.453553  836091 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21647-834184/.minikube/config/config.json: open /home/jenkins/minikube-integration/21647-834184/.minikube/config/config.json: no such file or directory
	I1018 12:15:34.454050  836091 out.go:368] Setting JSON to true
	I1018 12:15:34.455053  836091 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":14287,"bootTime":1760775448,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:15:34.455154  836091 start.go:141] virtualization:  
	I1018 12:15:34.459424  836091 out.go:99] [download-only-019533] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1018 12:15:34.459638  836091 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 12:15:34.459764  836091 notify.go:220] Checking for updates...
	I1018 12:15:34.464762  836091 out.go:171] MINIKUBE_LOCATION=21647
	I1018 12:15:34.467741  836091 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:15:34.470645  836091 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:15:34.473979  836091 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:15:34.476930  836091 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 12:15:34.482550  836091 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 12:15:34.482925  836091 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:15:34.510246  836091 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:15:34.510358  836091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:15:34.565553  836091 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 12:15:34.556286882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:15:34.565658  836091 docker.go:318] overlay module found
	I1018 12:15:34.568759  836091 out.go:99] Using the docker driver based on user configuration
	I1018 12:15:34.568789  836091 start.go:305] selected driver: docker
	I1018 12:15:34.568796  836091 start.go:925] validating driver "docker" against <nil>
	I1018 12:15:34.568912  836091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:15:34.633192  836091 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 12:15:34.622890133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:15:34.633358  836091 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:15:34.633645  836091 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 12:15:34.633808  836091 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 12:15:34.636868  836091 out.go:171] Using Docker driver with root privileges
	I1018 12:15:34.639821  836091 cni.go:84] Creating CNI manager for ""
	I1018 12:15:34.639902  836091 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 12:15:34.639917  836091 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:15:34.640006  836091 start.go:349] cluster config:
	{Name:download-only-019533 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-019533 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:15:34.643113  836091 out.go:99] Starting "download-only-019533" primary control-plane node in "download-only-019533" cluster
	I1018 12:15:34.643146  836091 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 12:15:34.646133  836091 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:15:34.646180  836091 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 12:15:34.646348  836091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:15:34.662020  836091 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:15:34.662223  836091 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 12:15:34.662319  836091 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:15:34.700717  836091 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 12:15:34.700771  836091 cache.go:58] Caching tarball of preloaded images
	I1018 12:15:34.700940  836091 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 12:15:34.704280  836091 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 12:15:34.704322  836091 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1018 12:15:34.797611  836091 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1018 12:15:34.797743  836091 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 12:15:38.092415  836091 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 12:15:38.092798  836091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/download-only-019533/config.json ...
	I1018 12:15:38.092834  836091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/download-only-019533/config.json: {Name:mk08352446dcb954728c766331e261aef6cd1db6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:15:38.093010  836091 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 12:15:38.093226  836091 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21647-834184/.minikube/cache/linux/arm64/v1.28.0/kubectl
	I1018 12:15:39.452869  836091 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 12:15:39.452885  836091 cache.go:232] Successfully downloaded all kic artifacts
	
	
	* The control-plane node download-only-019533 host does not exist
	  To start a cluster, run: "minikube start -p download-only-019533"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-019533
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-794243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-794243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.348416876s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 12:15:44.293531  836086 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 12:15:44.293569  836086 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-834184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-794243
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-794243: exit status 85 (90.468497ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-019533 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-019533 │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ delete  │ -p download-only-019533                                                                                                                                                   │ download-only-019533 │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	│ start   │ -o=json --download-only -p download-only-794243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-794243 │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:15:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:15:39.990206  836296 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:15:39.990385  836296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:15:39.990417  836296 out.go:374] Setting ErrFile to fd 2...
	I1018 12:15:39.990437  836296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:15:39.990742  836296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:15:39.991250  836296 out.go:368] Setting JSON to true
	I1018 12:15:39.992197  836296 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":14292,"bootTime":1760775448,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:15:39.992347  836296 start.go:141] virtualization:  
	I1018 12:15:39.995832  836296 out.go:99] [download-only-794243] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:15:39.996064  836296 notify.go:220] Checking for updates...
	I1018 12:15:40.018627  836296 out.go:171] MINIKUBE_LOCATION=21647
	I1018 12:15:40.025292  836296 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:15:40.028385  836296 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:15:40.031747  836296 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:15:40.035037  836296 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 12:15:40.041801  836296 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 12:15:40.042182  836296 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:15:40.064691  836296 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:15:40.064833  836296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:15:40.125869  836296 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-18 12:15:40.116140694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:15:40.125990  836296 docker.go:318] overlay module found
	I1018 12:15:40.129030  836296 out.go:99] Using the docker driver based on user configuration
	I1018 12:15:40.129078  836296 start.go:305] selected driver: docker
	I1018 12:15:40.129092  836296 start.go:925] validating driver "docker" against <nil>
	I1018 12:15:40.129203  836296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:15:40.192731  836296 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-18 12:15:40.181192841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:15:40.192899  836296 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:15:40.193181  836296 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 12:15:40.193361  836296 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 12:15:40.196677  836296 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-794243 host does not exist
	  To start a cluster, run: "minikube start -p download-only-794243"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)
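The "Last Start" log above validates the docker driver by shelling out to docker system info --format "{{json .}}" twice and reading fields such as NCPU and MemTotal (8214831104 bytes, the ~7834MB that the 3072MB memory-allocation suggestion is derived from). A minimal sketch of that probe, assuming only the standard docker CLI and the handful of JSON fields visible in the log:

	// Sketch only (not minikube's cli_runner): run `docker system info
	// --format "{{json .}}"` and decode a few of the fields used above.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type dockerInfo struct {
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
		OSType        string `json:"OSType"`
		ServerVersion string `json:"ServerVersion"`
	}
	
	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		// The host above reports NCPU:2 and MemTotal:8214831104 (~7834 MiB).
		fmt.Printf("docker %s on %s: %d CPUs, %d MiB\n",
			info.ServerVersion, info.OSType, info.NCPU, info.MemTotal/1024/1024)
	}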

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-794243
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
I1018 12:15:45.723424  836086 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-959514 --alsologtostderr --binary-mirror http://127.0.0.1:36463 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-959514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-959514
--- PASS: TestBinaryMirror (0.68s)
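TestBinaryMirror pulls kubectl through a local mirror, and binary.go above notes that the download URL carries a checksum=file:...kubectl.sha256 reference, meaning the expected digest lives in a sidecar file rather than inline. A hedged sketch of that verification pattern (not the downloader minikube actually uses): fetch the published .sha256, compute SHA-256 over the local binary, and compare.

	// Sketch: verify a downloaded binary against its published .sha256 sidecar.
	package main
	
	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)
	
	// expectedSum fetches a .sha256 file (e.g. .../kubectl.sha256) whose body is
	// the hex digest, optionally followed by a filename.
	func expectedSum(url string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return "", err
		}
		fields := strings.Fields(string(body))
		if len(fields) == 0 {
			return "", fmt.Errorf("empty checksum file at %s", url)
		}
		return fields[0], nil
	}
	
	func main() {
		// Hypothetical arguments: local binary path, then the sidecar checksum URL.
		f, err := os.Open(os.Args[1])
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		want, err := expectedSum(os.Args[2])
		if err != nil {
			panic(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			fmt.Fprintf(os.Stderr, "sha256 mismatch: got %s want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Println("sha256 OK")
	}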

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-206214
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-206214: exit status 85 (114.805484ms)

                                                
                                                
-- stdout --
	* Profile "addons-206214" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-206214"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)
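Both PreSetup checks rely on the command failing with a specific exit status (85 here) rather than succeeding or crashing outright. A small sketch of reading that status from Go with os/exec, the general pattern behind the Non-zero exit assertions above; the helper name and invocation are illustrative, not the test framework's own:

	// Illustrative sketch: run a command and report its exit code.
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func exitCode(name string, args ...string) (int, error) {
		err := exec.Command(name, args...).Run()
		if err == nil {
			return 0, nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode(), nil // command ran but exited non-zero
		}
		return -1, err // command could not be started at all
	}
	
	func main() {
		// Hypothetical example mirroring the check above: enabling an addon on a
		// profile that does not exist is expected to exit with status 85.
		code, err := exitCode("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-206214")
		if err != nil {
			panic(err)
		}
		fmt.Println("exit code:", code)
	}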

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.15s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-206214
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-206214: exit status 85 (151.984105ms)

                                                
                                                
-- stdout --
	* Profile "addons-206214" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-206214"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.15s)

                                                
                                    
x
+
TestAddons/Setup (175.02s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-206214 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-206214 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m55.017342615s)
--- PASS: TestAddons/Setup (175.02s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-206214 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-206214 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.85s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-206214 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-206214 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d1612345-fd53-40ba-a2c2-e00c4033c841] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d1612345-fd53-40ba-a2c2-e00c4033c841] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003854363s
addons_test.go:694: (dbg) Run:  kubectl --context addons-206214 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-206214 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-206214 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-206214 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.85s)
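The FakeCredentials test above waits up to 8m0s for the busybox pod to become healthy before exec'ing printenv inside it. A sketch of that wait-then-check pattern using plain kubectl polling; the function and timings are illustrative, not the helpers_test.go implementation:

	// Sketch: poll a pod's phase with kubectl until it is Running or a timeout elapses.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func waitForRunning(ctx, ns, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
				"get", "pod", pod, "-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Running" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Running within %s", ns, pod, timeout)
	}
	
	func main() {
		// The test above waits up to 8m for the busybox pod in "default".
		if err := waitForRunning("addons-206214", "default", "busybox", 8*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("busybox is Running")
	}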

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-206214
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-206214: (12.109220527s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-206214
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-206214
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-206214
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

                                                
                                    
x
+
TestCertOptions (37.43s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-179041 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-179041 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.527939757s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-179041 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-179041 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-179041 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-179041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-179041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-179041: (2.102473191s)
--- PASS: TestCertOptions (37.43s)
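TestCertOptions verifies the extra --apiserver-ips and --apiserver-names by dumping /var/lib/minikube/certs/apiserver.crt with openssl inside the node. The same inspection can be done programmatically; below is a minimal sketch with crypto/x509 (assuming the file is readable where the program runs), which also exposes the NotAfter field that the TestCertExpiration run below manipulates via --cert-expiration:

	// Sketch: decode the apiserver certificate and print its SANs and expiry.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The test expects the extra --apiserver-ips and --apiserver-names here,
		// e.g. 192.168.15.15 and www.google.com.
		fmt.Println("DNS names:", cert.DNSNames)
		fmt.Println("IPs:      ", cert.IPAddresses)
		fmt.Println("NotAfter: ", cert.NotAfter)
	}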

                                                
                                    
x
+
TestCertExpiration (337.11s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-076887 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1018 13:18:42.457292  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-076887 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.373502238s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-076887 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m52.239890291s)
helpers_test.go:175: Cleaning up "cert-expiration-076887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-076887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-076887: (3.487151048s)
--- PASS: TestCertExpiration (337.11s)

                                                
                                    
x
+
TestForceSystemdFlag (40.88s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-882807 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-882807 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.612347173s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-882807 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-882807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-882807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-882807: (2.859082221s)
--- PASS: TestForceSystemdFlag (40.88s)
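TestForceSystemdFlag cats the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf after starting with --force-systemd, presumably to confirm which cgroup manager it ends up with. A hedged sketch of that check, assuming CRI-O's standard cgroup_manager key and that the file is readable where the program runs (e.g. inside the node):

	// Sketch: scan the CRI-O drop-in for its cgroup_manager setting.
	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		f, err := os.Open("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "cgroup_manager") {
				fmt.Println(line) // with --force-systemd this is expected to name "systemd"
			}
		}
		if err := sc.Err(); err != nil {
			panic(err)
		}
	}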

                                                
                                    
x
+
TestForceSystemdEnv (41.98s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-914730 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-914730 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.255530043s)
helpers_test.go:175: Cleaning up "force-systemd-env-914730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-914730
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-914730: (2.725482041s)
--- PASS: TestForceSystemdEnv (41.98s)

                                                
                                    
x
+
TestErrorSpam/setup (32.1s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-541383 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-541383 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-541383 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-541383 --driver=docker  --container-runtime=crio: (32.098035969s)
--- PASS: TestErrorSpam/setup (32.10s)

                                                
                                    
x
+
TestErrorSpam/start (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

                                                
                                    
x
+
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
x
+
TestErrorSpam/pause (7.31s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause: exit status 80 (2.608036297s)

                                                
                                                
-- stdout --
	* Pausing node nospam-541383 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:23:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause: exit status 80 (2.254017198s)

                                                
                                                
-- stdout --
	* Pausing node nospam-541383 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:23:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause: exit status 80 (2.443343042s)

                                                
                                                
-- stdout --
	* Pausing node nospam-541383 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:23:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.31s)
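Each pause attempt above fails at the same step: minikube asks runc for the running containers with sudo runc list -f json, and runc exits 1 because /run/runc is missing. A sketch of issuing that listing yourself and decoding the pieces the pause path needs (container id and status); the struct covers only a subset of runc's JSON output, and the lowercase id/status field names are an assumption about this runc version:

	// Sketch: list containers via runc and decode id/status from its JSON output.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// containerState is a partial view of one entry from `runc list -f json`.
	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	
	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the failure mode in the log above: exit status 1 with
			// "open /run/runc: no such file or directory" on stderr.
			panic(err)
		}
		var containers []containerState
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			fmt.Printf("%s: %s\n", c.ID, c.Status)
		}
	}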

                                                
                                    
x
+
TestErrorSpam/unpause (4.95s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause: exit status 80 (1.605584328s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-541383 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:23:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause: exit status 80 (1.547775196s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-541383 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:23:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause: exit status 80 (1.799582798s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-541383 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T12:23:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (4.95s)

                                                
                                    
x
+
TestErrorSpam/stop (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 stop: (1.31514135s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-541383 --log_dir /tmp/nospam-541383 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21647-834184/.minikube/files/etc/test/nested/copy/836086/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (51.55s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767781 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1018 12:23:42.460582  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:42.467338  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:42.478654  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:42.500014  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:42.541371  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:42.622747  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:42.784181  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:43.105785  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:43.747781  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:45.029340  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:47.591349  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:23:52.712628  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:24:02.954406  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-767781 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (51.546974067s)
--- PASS: TestFunctional/serial/StartWithProxy (51.55s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.06s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1018 12:24:10.291919  836086 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767781 --alsologtostderr -v=8
E1018 12:24:23.435776  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-767781 --alsologtostderr -v=8: (27.052431367s)
functional_test.go:678: soft start took 27.058890826s for "functional-767781" cluster.
I1018 12:24:37.345094  836086 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.06s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-767781 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-767781 cache add registry.k8s.io/pause:3.1: (1.179950613s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-767781 cache add registry.k8s.io/pause:3.3: (1.147298822s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-767781 cache add registry.k8s.io/pause:latest: (1.163015159s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-767781 /tmp/TestFunctionalserialCacheCmdcacheadd_local2560606640/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cache add minikube-local-cache-test:functional-767781
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cache delete minikube-local-cache-test:functional-767781
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-767781
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.656566ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 kubectl -- --context functional-767781 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-767781 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (42.22s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767781 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 12:25:04.397117  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-767781 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.222556658s)
functional_test.go:776: restart took 42.222652224s for "functional-767781" cluster.
I1018 12:25:27.024937  836086 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.22s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-767781 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)
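The health check above is driven by a single query over the control-plane pods and can be reproduced directly (command as logged; phase and Ready status are read from the returned JSON):
	kubectl --context functional-767781 get po -l tier=control-plane -n kube-system -o=json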

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-767781 logs: (1.481790019s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 logs --file /tmp/TestFunctionalserialLogsFileCmd3983136352/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-767781 logs --file /tmp/TestFunctionalserialLogsFileCmd3983136352/001/logs.txt: (1.54953728s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-767781 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-767781
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-767781: exit status 115 (388.846056ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32467 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-767781 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)
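The scenario as a standalone sketch (commands taken from this run; the comment records the observed exit code):
	kubectl --context functional-767781 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-767781          # exit 115: SVC_UNREACHABLE, no running pod backs the service
	kubectl --context functional-767781 delete -f testdata/invalidsvc.yaml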

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 config get cpus: exit status 14 (69.11939ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 config get cpus: exit status 14 (75.176142ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
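The set/get/unset round-trip shown above, condensed into a sketch; the comments note the behaviour observed in this run:
	out/minikube-linux-arm64 -p functional-767781 config unset cpus
	out/minikube-linux-arm64 -p functional-767781 config get cpus     # exit 14: key not present in config
	out/minikube-linux-arm64 -p functional-767781 config set cpus 2
	out/minikube-linux-arm64 -p functional-767781 config get cpus     # succeeds while the key is set
	out/minikube-linux-arm64 -p functional-767781 config unset cpus
	out/minikube-linux-arm64 -p functional-767781 config get cpus     # exit 14 again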

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-767781 --alsologtostderr -v=1]
2025/10/18 12:36:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-767781 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 864006: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.76s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-767781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (318.908722ms)

                                                
                                                
-- stdout --
	* [functional-767781] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:36:00.516514  863704 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:36:00.516809  863704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:36:00.516833  863704 out.go:374] Setting ErrFile to fd 2...
	I1018 12:36:00.516840  863704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:36:00.517359  863704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:36:00.518036  863704 out.go:368] Setting JSON to false
	I1018 12:36:00.519192  863704 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15513,"bootTime":1760775448,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:36:00.519375  863704 start.go:141] virtualization:  
	I1018 12:36:00.523455  863704 out.go:179] * [functional-767781] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:36:00.527717  863704 notify.go:220] Checking for updates...
	I1018 12:36:00.531732  863704 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:36:00.535579  863704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:36:00.539377  863704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:36:00.542443  863704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:36:00.549035  863704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:36:00.576279  863704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:36:00.580016  863704 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:36:00.580673  863704 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:36:00.617263  863704 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:36:00.617449  863704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:36:00.683115  863704 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:36:00.672838478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:36:00.683235  863704 docker.go:318] overlay module found
	I1018 12:36:00.686372  863704 out.go:179] * Using the docker driver based on existing profile
	I1018 12:36:00.689333  863704 start.go:305] selected driver: docker
	I1018 12:36:00.689358  863704 start.go:925] validating driver "docker" against &{Name:functional-767781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-767781 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:36:00.689482  863704 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:36:00.693033  863704 out.go:203] 
	W1018 12:36:00.696073  863704 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 12:36:00.699032  863704 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767781 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)
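Both dry-run invocations from this test, side by side; only the memory request differs (comments summarize the outcomes logged above):
	out/minikube-linux-arm64 start -p functional-767781 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY, 250MiB is below the 1800MB minimum
	out/minikube-linux-arm64 start -p functional-767781 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio             # validates against the existing profile and succeeds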

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-767781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-767781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (203.663539ms)

                                                
                                                
-- stdout --
	* [functional-767781] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:36:00.991535  863824 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:36:00.991697  863824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:36:00.991709  863824 out.go:374] Setting ErrFile to fd 2...
	I1018 12:36:00.991714  863824 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:36:00.992784  863824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:36:00.993255  863824 out.go:368] Setting JSON to false
	I1018 12:36:00.994128  863824 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15513,"bootTime":1760775448,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 12:36:00.994198  863824 start.go:141] virtualization:  
	I1018 12:36:00.997375  863824 out.go:179] * [functional-767781] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1018 12:36:01.000294  863824 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:36:01.000443  863824 notify.go:220] Checking for updates...
	I1018 12:36:01.010507  863824 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:36:01.013468  863824 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 12:36:01.016346  863824 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 12:36:01.019962  863824 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:36:01.022984  863824 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:36:01.026380  863824 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:36:01.026930  863824 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:36:01.048744  863824 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:36:01.048978  863824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:36:01.119090  863824 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:36:01.109298698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:36:01.119196  863824 docker.go:318] overlay module found
	I1018 12:36:01.122318  863824 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 12:36:01.125174  863824 start.go:305] selected driver: docker
	I1018 12:36:01.125203  863824 start.go:925] validating driver "docker" against &{Name:functional-767781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-767781 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:36:01.125310  863824 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:36:01.128801  863824 out.go:203] 
	W1018 12:36:01.131773  863824 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 12:36:01.134629  863824 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e52f81c5-0ad7-46a3-be5e-b35880362a07] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003384037s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-767781 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-767781 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-767781 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-767781 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [59f4a8fb-7463-4a55-8a21-c0d96c265cdf] Pending
helpers_test.go:352: "sp-pod" [59f4a8fb-7463-4a55-8a21-c0d96c265cdf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [59f4a8fb-7463-4a55-8a21-c0d96c265cdf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.002978352s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-767781 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-767781 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-767781 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2a8b2cd0-e847-42c7-9dbc-1df5b945a485] Pending
helpers_test.go:352: "sp-pod" [2a8b2cd0-e847-42c7-9dbc-1df5b945a485] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003505343s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-767781 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.54s)
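The persistence check condensed into its kubectl steps (all taken from the log above); a file written before the pod is recreated must still be visible afterwards:
	kubectl --context functional-767781 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-767781 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-767781 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-767781 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-767781 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-767781 exec sp-pod -- ls /tmp/mount          # foo persists across pod recreation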

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh -n functional-767781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cp functional-767781:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd515542278/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh -n functional-767781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh -n functional-767781 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.14s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/836086/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /etc/test/nested/copy/836086/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/836086.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /etc/ssl/certs/836086.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/836086.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /usr/share/ca-certificates/836086.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8360862.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /etc/ssl/certs/8360862.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8360862.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /usr/share/ca-certificates/8360862.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.30s)
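Spot-checking the synced certificate from inside the node, as the test does (paths taken from this run; the .0 file is the same certificate under its hashed name):
	out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /etc/ssl/certs/836086.pem"
	out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /usr/share/ca-certificates/836086.pem"
	out/minikube-linux-arm64 -p functional-767781 ssh "sudo cat /etc/ssl/certs/51391683.0"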

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-767781 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 ssh "sudo systemctl is-active docker": exit status 1 (355.016869ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 ssh "sudo systemctl is-active containerd": exit status 1 (358.280599ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
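The same checks by hand; on this crio cluster both commands print "inactive" and exit non-zero (systemctl reports status 3, surfaced through ssh as the exit 1 seen above):
	out/minikube-linux-arm64 -p functional-767781 ssh "sudo systemctl is-active docker"
	out/minikube-linux-arm64 -p functional-767781 ssh "sudo systemctl is-active containerd"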

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-767781 version -o=json --components: (1.171046355s)
--- PASS: TestFunctional/parallel/Version/components (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767781 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767781 image ls --format short --alsologtostderr:
I1018 12:36:12.666255  864385 out.go:360] Setting OutFile to fd 1 ...
I1018 12:36:12.666495  864385 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:12.666510  864385 out.go:374] Setting ErrFile to fd 2...
I1018 12:36:12.666520  864385 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:12.666858  864385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
I1018 12:36:12.667599  864385 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:12.667805  864385 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:12.668356  864385 cli_runner.go:164] Run: docker container inspect functional-767781 --format={{.State.Status}}
I1018 12:36:12.686556  864385 ssh_runner.go:195] Run: systemctl --version
I1018 12:36:12.686610  864385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767781
I1018 12:36:12.708872  864385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/functional-767781/id_rsa Username:docker}
I1018 12:36:12.814485  864385 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767781 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ localhost/my-image                      │ functional-767781  │ 946d3886167c1 │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767781 image ls --format table --alsologtostderr:
I1018 12:36:17.286267  864870 out.go:360] Setting OutFile to fd 1 ...
I1018 12:36:17.286436  864870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:17.286466  864870 out.go:374] Setting ErrFile to fd 2...
I1018 12:36:17.286487  864870 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:17.286770  864870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
I1018 12:36:17.287436  864870 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:17.287632  864870 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:17.288210  864870 cli_runner.go:164] Run: docker container inspect functional-767781 --format={{.State.Status}}
I1018 12:36:17.307357  864870 ssh_runner.go:195] Run: systemctl --version
I1018 12:36:17.307523  864870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767781
I1018 12:36:17.327001  864870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/functional-767781/id_rsa Username:docker}
I1018 12:36:17.430205  864870 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767781 image ls --format json --alsologtostderr:
[{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1e
a4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/me
trics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8cb2091f603e75187e2f62
26c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["dock
er.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb619
99d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767781 image ls --format json --alsologtostderr:
I1018 12:36:12.903146  864421 out.go:360] Setting OutFile to fd 1 ...
I1018 12:36:12.903313  864421 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:12.903324  864421 out.go:374] Setting ErrFile to fd 2...
I1018 12:36:12.903329  864421 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:12.903595  864421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
I1018 12:36:12.904248  864421 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:12.904373  864421 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:12.904826  864421 cli_runner.go:164] Run: docker container inspect functional-767781 --format={{.State.Status}}
I1018 12:36:12.924815  864421 ssh_runner.go:195] Run: systemctl --version
I1018 12:36:12.924880  864421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767781
I1018 12:36:12.941750  864421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/functional-767781/id_rsa Username:docker}
I1018 12:36:13.050469  864421 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767781 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 4ee98994ddd73c5c96f3a273e8cde7762ab07e81ed0b179fb3905ea9d442e8f7
repoDigests:
- docker.io/library/e4c91a7992a0d3025c0535db1863960d3775c4697847b090f3400640960ef1a4-tmp@sha256:a935f42a1568739ed2bbb3bcf161992eb07496f568f830d22ae06ea680edfe08
repoTags: []
size: "1638179"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 946d3886167c18ff689526eb923b58e441485e300cb2c142bf977a1b39b354d0
repoDigests:
- localhost/my-image@sha256:1eadefe8d63fa6f07f10ea9c4d4fcb407b2e785ad8860d47c9adbc6a73234c4a
repoTags:
- localhost/my-image:functional-767781
size: "1640791"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1634527"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767781 image ls --format yaml --alsologtostderr:
I1018 12:36:17.039471  864833 out.go:360] Setting OutFile to fd 1 ...
I1018 12:36:17.039638  864833 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:17.039702  864833 out.go:374] Setting ErrFile to fd 2...
I1018 12:36:17.039724  864833 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:17.039996  864833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
I1018 12:36:17.040631  864833 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:17.040798  864833 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:17.041296  864833 cli_runner.go:164] Run: docker container inspect functional-767781 --format={{.State.Status}}
I1018 12:36:17.060005  864833 ssh_runner.go:195] Run: systemctl --version
I1018 12:36:17.060060  864833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767781
I1018 12:36:17.078604  864833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/functional-767781/id_rsa Username:docker}
I1018 12:36:17.182282  864833 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
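
The JSON and YAML listings above are the same data rendered through different --format values of the image ls subcommand; entries with repoTags: [] (the dashboard, metrics-scraper and the intermediate build layer) are images that exist only by digest in the crio store. A minimal sketch of the two invocations as run in this report (the report drives the locally built out/minikube-linux-arm64 binary; plain minikube is used here for brevity):

# List images known to the container runtime on the node, as YAML.
minikube -p functional-767781 image ls --format yaml --alsologtostderr
# Same listing as a single JSON document, convenient for machine parsing.
minikube -p functional-767781 image ls --format json --alsologtostderr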

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 ssh pgrep buildkitd: exit status 1 (289.846829ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image build -t localhost/my-image:functional-767781 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-767781 image build -t localhost/my-image:functional-767781 testdata/build --alsologtostderr: (3.359932499s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-767781 image build -t localhost/my-image:functional-767781 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4ee98994ddd
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-767781
--> 946d3886167
Successfully tagged localhost/my-image:functional-767781
946d3886167c18ff689526eb923b58e441485e300cb2c142bf977a1b39b354d0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-767781 image build -t localhost/my-image:functional-767781 testdata/build --alsologtostderr:
I1018 12:36:13.431798  864529 out.go:360] Setting OutFile to fd 1 ...
I1018 12:36:13.432569  864529 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:13.432617  864529 out.go:374] Setting ErrFile to fd 2...
I1018 12:36:13.432640  864529 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:36:13.432925  864529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
I1018 12:36:13.433610  864529 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:13.434377  864529 config.go:182] Loaded profile config "functional-767781": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 12:36:13.434908  864529 cli_runner.go:164] Run: docker container inspect functional-767781 --format={{.State.Status}}
I1018 12:36:13.453199  864529 ssh_runner.go:195] Run: systemctl --version
I1018 12:36:13.453270  864529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767781
I1018 12:36:13.474307  864529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/functional-767781/id_rsa Username:docker}
I1018 12:36:13.578764  864529 build_images.go:161] Building image from path: /tmp/build.2319567654.tar
I1018 12:36:13.578936  864529 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 12:36:13.587620  864529 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2319567654.tar
I1018 12:36:13.591675  864529 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2319567654.tar: stat -c "%s %y" /var/lib/minikube/build/build.2319567654.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2319567654.tar': No such file or directory
I1018 12:36:13.591706  864529 ssh_runner.go:362] scp /tmp/build.2319567654.tar --> /var/lib/minikube/build/build.2319567654.tar (3072 bytes)
I1018 12:36:13.612277  864529 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2319567654
I1018 12:36:13.620217  864529 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2319567654 -xf /var/lib/minikube/build/build.2319567654.tar
I1018 12:36:13.628888  864529 crio.go:315] Building image: /var/lib/minikube/build/build.2319567654
I1018 12:36:13.629018  864529 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-767781 /var/lib/minikube/build/build.2319567654 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1018 12:36:16.712894  864529 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-767781 /var/lib/minikube/build/build.2319567654 --cgroup-manager=cgroupfs: (3.083837618s)
I1018 12:36:16.712966  864529 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2319567654
I1018 12:36:16.720954  864529 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2319567654.tar
I1018 12:36:16.728993  864529 build_images.go:217] Built localhost/my-image:functional-767781 from /tmp/build.2319567654.tar
I1018 12:36:16.729028  864529 build_images.go:133] succeeded building to: functional-767781
I1018 12:36:16.729034  864529 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
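
The build log above shows three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) executed on the node with podman, which is how minikube builds images when the container runtime is crio. A hedged reconstruction of the flow, with the Dockerfile inferred only from those STEP lines and the file contents invented as placeholders:

# Rebuild an equivalent context locally (the real testdata/build may differ in details).
mkdir -p build && cd build
printf 'placeholder content\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

# minikube tars the context, copies it to the node and runs podman build there
# (see the /var/lib/minikube/build/... and "sudo podman build" lines above).
minikube -p functional-767781 image build -t localhost/my-image:functional-767781 . --alsologtostderr
minikube -p functional-767781 image ls   # the result appears as localhost/my-image:functional-767781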

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-767781
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image rm kicbase/echo-server:functional-767781 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-767781 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-767781 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-767781 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 860099: os: process already finished
helpers_test.go:519: unable to terminate pid 859973: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-767781 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-767781 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-767781 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6dfe21f3-cbf5-4664-abf7-04d983902d16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6dfe21f3-cbf5-4664-abf7-04d983902d16] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004215565s
I1018 12:25:52.194659  836086 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-767781 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.11.80 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-767781 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
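
The tunnel tests above run a fixed sequence: start minikube tunnel as a background daemon, deploy a LoadBalancer service, wait for its ingress IP, hit it directly, then terminate the tunnel. A minimal sketch of that flow against the same profile (testsvc.yaml is the test's own nginx manifest; any LoadBalancer service would behave the same way):

# Keep the tunnel alive in the background; it provides the LoadBalancer IP.
minikube -p functional-767781 tunnel --alsologtostderr &
TUNNEL_PID=$!

# Deploy the nginx service and wait for its pod (label run=nginx-svc, as in the log).
kubectl --context functional-767781 apply -f testdata/testsvc.yaml
kubectl --context functional-767781 wait --for=condition=Ready pod -l run=nginx-svc --timeout=240s

# Read the ingress IP assigned through the tunnel and access it directly.
IP=$(kubectl --context functional-767781 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sf "http://${IP}/" >/dev/null && echo "tunnel at http://${IP} is working"

# Tear down; the DeleteTunnel step above does this by sending SIGTERM.
kill "$TUNNEL_PID"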

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 service list -o json
functional_test.go:1504: Took "514.76853ms" to run "out/minikube-linux-arm64 -p functional-767781 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)
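
service list has a plain form and a JSON form; the JSON form is the one timed above and the one a script would parse. A small sketch (json.tool is only for pretty-printing the output, not part of the test):

# Human-readable table of services across namespaces.
minikube -p functional-767781 service list
# Machine-readable variant, pretty-printed for inspection.
minikube -p functional-767781 service list -o json | python3 -m json.tool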

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "372.603023ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "63.962807ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "357.709515ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.065375ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
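
The timings above contrast the full profile listings (~360-370 ms) with the light variants (~54-64 ms); the light flags skip validating each cluster's live status, which is presumably what the order-of-magnitude difference reflects. The four invocations, as a sketch:

# Full listings: probe each profile's current status.
minikube profile list
minikube profile list -o json
# Light listings: skip the status checks, hence the much faster timings logged above.
minikube profile list -l
minikube profile list -o json --light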

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdany-port38407582/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760790947971941458" to /tmp/TestFunctionalparallelMountCmdany-port38407582/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760790947971941458" to /tmp/TestFunctionalparallelMountCmdany-port38407582/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760790947971941458" to /tmp/TestFunctionalparallelMountCmdany-port38407582/001/test-1760790947971941458
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.793289ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 12:35:48.349878  836086 retry.go:31] will retry after 585.586492ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 12:35 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 12:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 12:35 test-1760790947971941458
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh cat /mount-9p/test-1760790947971941458
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-767781 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [dad107e0-75a5-4474-a2d5-f54992d76c11] Pending
helpers_test.go:352: "busybox-mount" [dad107e0-75a5-4474-a2d5-f54992d76c11] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [dad107e0-75a5-4474-a2d5-f54992d76c11] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [dad107e0-75a5-4474-a2d5-f54992d76c11] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003564242s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-767781 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdany-port38407582/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.11s)
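
The any-port test drives minikube's 9p host mount end to end: mount a host temp directory into the node at /mount-9p, verify it with findmnt over ssh, then run the busybox-mount pod from testdata against it and check the file the pod leaves behind. A condensed sketch of the same flow (the host directory is arbitrary; busybox-mount-test.yaml is the test's own manifest):

SRC=$(mktemp -d)
echo "created by test" > "$SRC/created-by-test"

# Start the 9p mount in the background; it stays up until the process is killed.
minikube mount -p functional-767781 "$SRC:/mount-9p" --alsologtostderr -v=1 &
MOUNT_PID=$!

# Verify the mount from inside the node and inspect its contents.
minikube -p functional-767781 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-767781 ssh -- ls -la /mount-9p

# Run the pod that reads and writes under the mounted path, then check its file.
kubectl --context functional-767781 replace --force -f testdata/busybox-mount-test.yaml
kubectl --context functional-767781 wait --for=jsonpath='{.status.phase}'=Succeeded pod/busybox-mount --timeout=240s
minikube -p functional-767781 ssh stat /mount-9p/created-by-pod

# Clean up: unmount inside the node and stop the mount process.
minikube -p functional-767781 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"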

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdspecific-port2753340514/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.484467ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 12:35:56.431230  836086 retry.go:31] will retry after 463.622468ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdspecific-port2753340514/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 ssh "sudo umount -f /mount-9p": exit status 1 (278.974313ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-767781 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdspecific-port2753340514/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T" /mount1: exit status 1 (610.591394ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 12:35:58.573274  836086 retry.go:31] will retry after 644.931447ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-767781 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-767781 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-767781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup57488994/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.43s)
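
VerifyCleanup starts three mounts of the same host directory and then uses mount --kill to terminate all of them in one call instead of stopping each process individually. A brief sketch:

SRC=$(mktemp -d)
for target in /mount1 /mount2 /mount3; do
  minikube mount -p functional-767781 "$SRC:$target" --alsologtostderr -v=1 &
done

# Confirm the mounts from inside the node.
for target in /mount1 /mount2 /mount3; do
  minikube -p functional-767781 ssh "findmnt -T $target"
done

# Kill every mount process belonging to the profile (the command the test runs above).
minikube mount -p functional-767781 --kill=true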

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-767781
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-767781
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-767781
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (203.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 12:38:42.457526  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m22.844785864s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.73s)
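
The HA start above brings up a multi-control-plane cluster with --ha and the flags shown in the log, then checks every node with status. A sketch of the two commands (minikube again stands for the locally built out/minikube-linux-arm64 binary):

# Start a highly-available cluster: three control planes with --ha, as the later status output shows.
minikube start -p ha-904693 --ha --memory 3072 --wait true \
  --driver=docker --container-runtime=crio --alsologtostderr -v 5

# Per-node report: host, kubelet, apiserver and kubeconfig state for each node.
minikube -p ha-904693 status --alsologtostderr -v 5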

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 kubectl -- rollout status deployment/busybox: (5.584389659s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-f89wj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-hrdj5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-v452k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-f89wj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-hrdj5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-v452k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-f89wj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-hrdj5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-v452k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.53s)
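
DeployApp applies a busybox deployment through the wrapped kubectl and then resolves external and in-cluster names from every replica, which is what the three rounds of nslookup above are doing. A condensed sketch (the test simply enumerates all pods in the default namespace, as below):

# Deploy the DNS test workload and wait for the rollout.
minikube -p ha-904693 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
minikube -p ha-904693 kubectl -- rollout status deployment/busybox

# From each replica, resolve an external name and the in-cluster API service name.
for pod in $(minikube -p ha-904693 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  minikube -p ha-904693 kubectl -- exec "$pod" -- nslookup kubernetes.io
  minikube -p ha-904693 kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done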

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-f89wj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-f89wj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-hrdj5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-hrdj5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-v452k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 kubectl -- exec busybox-7b57f96db7-v452k -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (32.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 node add --alsologtostderr -v 5
E1018 12:40:05.524893  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 node add --alsologtostderr -v 5: (31.401361602s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5: (1.087169326s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.49s)
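
node add attaches another node to the running HA profile; the test name and the later status output show it joins as a worker. A sketch of the two commands:

# Add a node to the existing cluster (a worker here) and re-check per-node status.
minikube -p ha-904693 node add --alsologtostderr -v 5
minikube -p ha-904693 status --alsologtostderr -v 5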

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-904693 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.085564852s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 status --output json --alsologtostderr -v 5: (1.078546477s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp testdata/cp-test.txt ha-904693:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2476059903/001/cp-test_ha-904693.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693:/home/docker/cp-test.txt ha-904693-m02:/home/docker/cp-test_ha-904693_ha-904693-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m02 "sudo cat /home/docker/cp-test_ha-904693_ha-904693-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693:/home/docker/cp-test.txt ha-904693-m03:/home/docker/cp-test_ha-904693_ha-904693-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m03 "sudo cat /home/docker/cp-test_ha-904693_ha-904693-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693:/home/docker/cp-test.txt ha-904693-m04:/home/docker/cp-test_ha-904693_ha-904693-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m04 "sudo cat /home/docker/cp-test_ha-904693_ha-904693-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp testdata/cp-test.txt ha-904693-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2476059903/001/cp-test_ha-904693-m02.txt
E1018 12:40:39.669978  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:40:39.676613  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:40:39.688103  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:40:39.711928  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:40:39.753312  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m02 "sudo cat /home/docker/cp-test.txt"
E1018 12:40:39.835284  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:40:39.998239  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m02:/home/docker/cp-test.txt ha-904693:/home/docker/cp-test_ha-904693-m02_ha-904693.txt
E1018 12:40:40.321763  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m02 "sudo cat /home/docker/cp-test.txt"
E1018 12:40:40.963636  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693 "sudo cat /home/docker/cp-test_ha-904693-m02_ha-904693.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m02:/home/docker/cp-test.txt ha-904693-m03:/home/docker/cp-test_ha-904693-m02_ha-904693-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m02 "sudo cat /home/docker/cp-test.txt"
E1018 12:40:42.245159  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m03 "sudo cat /home/docker/cp-test_ha-904693-m02_ha-904693-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m02:/home/docker/cp-test.txt ha-904693-m04:/home/docker/cp-test_ha-904693-m02_ha-904693-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m04 "sudo cat /home/docker/cp-test_ha-904693-m02_ha-904693-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp testdata/cp-test.txt ha-904693-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2476059903/001/cp-test_ha-904693-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m03 "sudo cat /home/docker/cp-test.txt"
E1018 12:40:44.806965  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m03:/home/docker/cp-test.txt ha-904693:/home/docker/cp-test_ha-904693-m03_ha-904693.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693 "sudo cat /home/docker/cp-test_ha-904693-m03_ha-904693.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m03:/home/docker/cp-test.txt ha-904693-m02:/home/docker/cp-test_ha-904693-m03_ha-904693-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m02 "sudo cat /home/docker/cp-test_ha-904693-m03_ha-904693-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m03:/home/docker/cp-test.txt ha-904693-m04:/home/docker/cp-test_ha-904693-m03_ha-904693-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m04 "sudo cat /home/docker/cp-test_ha-904693-m03_ha-904693-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp testdata/cp-test.txt ha-904693-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2476059903/001/cp-test_ha-904693-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m04 "sudo cat /home/docker/cp-test.txt"
E1018 12:40:49.928874  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693:/home/docker/cp-test_ha-904693-m04_ha-904693.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693 "sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693-m02:/home/docker/cp-test_ha-904693-m04_ha-904693-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m02 "sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 cp ha-904693-m04:/home/docker/cp-test.txt ha-904693-m03:/home/docker/cp-test_ha-904693-m04_ha-904693-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 ssh -n ha-904693-m03 "sudo cat /home/docker/cp-test_ha-904693-m04_ha-904693-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.54s)
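Note: every cp/ssh pair above follows the same round-trip pattern: copy a file in with "minikube cp", then read it back over "minikube ssh" and compare. A minimal Go sketch of that check follows; it assumes a plain "minikube" binary on PATH (the run above uses out/minikube-linux-arm64), and the function and variable names are illustrative only.

package sketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// copyAndVerify mirrors the helper pattern above: "minikube cp" to place the
// file, then "minikube ssh -n <node> sudo cat <dst>" to confirm it landed.
func copyAndVerify(profile, node, src, dst, want string) error {
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v", err)
	}
	if strings.TrimSpace(string(out)) != strings.TrimSpace(want) {
		return fmt.Errorf("content mismatch on %s:%s", node, dst)
	}
	return nil
}

For example, copyAndVerify("ha-904693", "ha-904693-m04", "testdata/cp-test.txt", "/home/docker/cp-test.txt", expected) would correspond to one cp/ssh pair in this block.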

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 node stop m02 --alsologtostderr -v 5
E1018 12:41:00.173096  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 node stop m02 --alsologtostderr -v 5: (12.130926991s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5: exit status 7 (803.634537ms)

                                                
                                                
-- stdout --
	ha-904693
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-904693-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-904693-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-904693-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:41:05.764467  880077 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:41:05.764646  880077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:41:05.764653  880077 out.go:374] Setting ErrFile to fd 2...
	I1018 12:41:05.764658  880077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:41:05.764924  880077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:41:05.765106  880077 out.go:368] Setting JSON to false
	I1018 12:41:05.765188  880077 mustload.go:65] Loading cluster: ha-904693
	I1018 12:41:05.765570  880077 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:41:05.765584  880077 status.go:174] checking status of ha-904693 ...
	I1018 12:41:05.766149  880077 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:41:05.767040  880077 notify.go:220] Checking for updates...
	I1018 12:41:05.785674  880077 status.go:371] ha-904693 host status = "Running" (err=<nil>)
	I1018 12:41:05.785701  880077 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:41:05.786009  880077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693
	I1018 12:41:05.803373  880077 host.go:66] Checking if "ha-904693" exists ...
	I1018 12:41:05.803880  880077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:41:05.803943  880077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693
	I1018 12:41:05.823213  880077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33892 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693/id_rsa Username:docker}
	I1018 12:41:05.929435  880077 ssh_runner.go:195] Run: systemctl --version
	I1018 12:41:05.936103  880077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:41:05.949371  880077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:41:06.025399  880077 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-18 12:41:06.000707182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:41:06.025979  880077 kubeconfig.go:125] found "ha-904693" server: "https://192.168.49.254:8443"
	I1018 12:41:06.026026  880077 api_server.go:166] Checking apiserver status ...
	I1018 12:41:06.026078  880077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:41:06.044348  880077 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup
	I1018 12:41:06.054560  880077 api_server.go:182] apiserver freezer: "11:freezer:/docker/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/crio/crio-8e42c6e1ca9940ccceeeb1be33a3efa54b23a3a0d77f91ddbf4f05e15e4b1f17"
	I1018 12:41:06.054640  880077 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e9432db50a50daafa57d01c7173672696624675fed8d805425891333a139e3e/crio/crio-8e42c6e1ca9940ccceeeb1be33a3efa54b23a3a0d77f91ddbf4f05e15e4b1f17/freezer.state
	I1018 12:41:06.063228  880077 api_server.go:204] freezer state: "THAWED"
	I1018 12:41:06.063254  880077 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 12:41:06.074171  880077 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 12:41:06.074202  880077 status.go:463] ha-904693 apiserver status = Running (err=<nil>)
	I1018 12:41:06.074214  880077 status.go:176] ha-904693 status: &{Name:ha-904693 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:41:06.074232  880077 status.go:174] checking status of ha-904693-m02 ...
	I1018 12:41:06.074576  880077 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:41:06.093213  880077 status.go:371] ha-904693-m02 host status = "Stopped" (err=<nil>)
	I1018 12:41:06.093240  880077 status.go:384] host is not running, skipping remaining checks
	I1018 12:41:06.093247  880077 status.go:176] ha-904693-m02 status: &{Name:ha-904693-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:41:06.093268  880077 status.go:174] checking status of ha-904693-m03 ...
	I1018 12:41:06.093587  880077 cli_runner.go:164] Run: docker container inspect ha-904693-m03 --format={{.State.Status}}
	I1018 12:41:06.117784  880077 status.go:371] ha-904693-m03 host status = "Running" (err=<nil>)
	I1018 12:41:06.117810  880077 host.go:66] Checking if "ha-904693-m03" exists ...
	I1018 12:41:06.118133  880077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m03
	I1018 12:41:06.137188  880077 host.go:66] Checking if "ha-904693-m03" exists ...
	I1018 12:41:06.137507  880077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:41:06.137553  880077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m03
	I1018 12:41:06.156398  880077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m03/id_rsa Username:docker}
	I1018 12:41:06.266548  880077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:41:06.280028  880077 kubeconfig.go:125] found "ha-904693" server: "https://192.168.49.254:8443"
	I1018 12:41:06.280056  880077 api_server.go:166] Checking apiserver status ...
	I1018 12:41:06.280105  880077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:41:06.291343  880077 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1191/cgroup
	I1018 12:41:06.299703  880077 api_server.go:182] apiserver freezer: "11:freezer:/docker/4e4d437d6333d8381c175ffca809a03fa516fb9e5ba2f1bc41e755b0ecac3733/crio/crio-5b0f0caa28111664d3cc2bb271a0399779001ce7bc035ecd7703969969ec98f1"
	I1018 12:41:06.299786  880077 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4e4d437d6333d8381c175ffca809a03fa516fb9e5ba2f1bc41e755b0ecac3733/crio/crio-5b0f0caa28111664d3cc2bb271a0399779001ce7bc035ecd7703969969ec98f1/freezer.state
	I1018 12:41:06.307791  880077 api_server.go:204] freezer state: "THAWED"
	I1018 12:41:06.307823  880077 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 12:41:06.316164  880077 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 12:41:06.316194  880077 status.go:463] ha-904693-m03 apiserver status = Running (err=<nil>)
	I1018 12:41:06.316204  880077 status.go:176] ha-904693-m03 status: &{Name:ha-904693-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:41:06.316222  880077 status.go:174] checking status of ha-904693-m04 ...
	I1018 12:41:06.316533  880077 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:41:06.334223  880077 status.go:371] ha-904693-m04 host status = "Running" (err=<nil>)
	I1018 12:41:06.334257  880077 host.go:66] Checking if "ha-904693-m04" exists ...
	I1018 12:41:06.334543  880077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-904693-m04
	I1018 12:41:06.360032  880077 host.go:66] Checking if "ha-904693-m04" exists ...
	I1018 12:41:06.360360  880077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:41:06.360404  880077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-904693-m04
	I1018 12:41:06.380964  880077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33907 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/ha-904693-m04/id_rsa Username:docker}
	I1018 12:41:06.489016  880077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:41:06.513831  880077 status.go:176] ha-904693-m04 status: &{Name:ha-904693-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.94s)
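Note: the stderr trace above shows how the status command decides an apiserver is Running: it locates the kube-apiserver process, reads the freezer cgroup state for its container, and then queries /healthz on the HA endpoint (https://192.168.49.254:8443 in this run). Below is a rough Go sketch of that probe, not minikube's actual implementation; the cgroup path and URL are taken from the log and differ per cluster.

package sketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

// apiserverState approximates the probe in the trace: the apiserver counts as
// Running only when its container's freezer cgroup is THAWED and /healthz
// answers 200 on the load-balanced endpoint.
func apiserverState(freezerStatePath, healthzURL string) (string, error) {
	state, err := os.ReadFile(freezerStatePath) // e.g. /sys/fs/cgroup/freezer/docker/<id>/crio/crio-<id>/freezer.state
	if err != nil {
		return "", err
	}
	if strings.TrimSpace(string(state)) != "THAWED" {
		return "Paused", nil
	}
	// The endpoint uses a self-signed cert, so verification is skipped in this sketch.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get(healthzURL) // e.g. https://192.168.49.254:8443/healthz
	if err != nil {
		return "Stopped", nil
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		return "Running", nil
	}
	return fmt.Sprintf("Error (HTTP %d)", resp.StatusCode), nil
}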

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (21.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 node start m02 --alsologtostderr -v 5
E1018 12:41:20.660287  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 node start m02 --alsologtostderr -v 5: (19.802969457s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5: (1.173833257s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.075690397s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (129.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 stop --alsologtostderr -v 5: (27.577938702s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 start --wait true --alsologtostderr -v 5
E1018 12:42:01.622146  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:23.544814  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 start --wait true --alsologtostderr -v 5: (1m41.563397105s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (129.33s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 node delete m03 --alsologtostderr -v 5
E1018 12:43:42.457691  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 node delete m03 --alsologtostderr -v 5: (8.911450544s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.92s)
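Note: the final go-template query above pulls the "Ready" condition for every remaining node to confirm the cluster stayed healthy after the delete. The same check can be done by decoding kubectl's JSON output; a sketch follows, assuming kubectl already points at this cluster's context.

package sketch

import (
	"encoding/json"
	"os/exec"
)

// nodeReadyStatuses returns the "Ready" condition status ("True"/"False") for
// every node, equivalent to the go-template query in the step above.
func nodeReadyStatuses() ([]string, error) {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		return nil, err
	}
	var list struct {
		Items []struct {
			Status struct {
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		return nil, err
	}
	var ready []string
	for _, node := range list.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == "Ready" {
				ready = append(ready, cond.Status)
			}
		}
	}
	return ready, nil
}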

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 stop --alsologtostderr -v 5: (36.005924183s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5: exit status 7 (120.740463ms)

                                                
                                                
-- stdout --
	ha-904693
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-904693-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-904693-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:44:25.593413  892094 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:44:25.593558  892094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.593586  892094 out.go:374] Setting ErrFile to fd 2...
	I1018 12:44:25.593603  892094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:44:25.593902  892094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 12:44:25.594122  892094 out.go:368] Setting JSON to false
	I1018 12:44:25.594173  892094 mustload.go:65] Loading cluster: ha-904693
	I1018 12:44:25.594235  892094 notify.go:220] Checking for updates...
	I1018 12:44:25.595519  892094 config.go:182] Loaded profile config "ha-904693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 12:44:25.595673  892094 status.go:174] checking status of ha-904693 ...
	I1018 12:44:25.597665  892094 cli_runner.go:164] Run: docker container inspect ha-904693 --format={{.State.Status}}
	I1018 12:44:25.616907  892094 status.go:371] ha-904693 host status = "Stopped" (err=<nil>)
	I1018 12:44:25.616930  892094 status.go:384] host is not running, skipping remaining checks
	I1018 12:44:25.616937  892094 status.go:176] ha-904693 status: &{Name:ha-904693 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:44:25.616968  892094 status.go:174] checking status of ha-904693-m02 ...
	I1018 12:44:25.617330  892094 cli_runner.go:164] Run: docker container inspect ha-904693-m02 --format={{.State.Status}}
	I1018 12:44:25.646469  892094 status.go:371] ha-904693-m02 host status = "Stopped" (err=<nil>)
	I1018 12:44:25.646496  892094 status.go:384] host is not running, skipping remaining checks
	I1018 12:44:25.646502  892094 status.go:176] ha-904693-m02 status: &{Name:ha-904693-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:44:25.646520  892094 status.go:174] checking status of ha-904693-m04 ...
	I1018 12:44:25.646805  892094 cli_runner.go:164] Run: docker container inspect ha-904693-m04 --format={{.State.Status}}
	I1018 12:44:25.663714  892094 status.go:371] ha-904693-m04 host status = "Stopped" (err=<nil>)
	I1018 12:44:25.663740  892094 status.go:384] host is not running, skipping remaining checks
	I1018 12:44:25.663748  892094 status.go:176] ha-904693-m04 status: &{Name:ha-904693-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.13s)
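Note: with every node stopped, "minikube status" still prints the per-node table but exits non-zero (exit status 7 in this run), so callers have to inspect the exit code rather than treat the command as failed. A small Go sketch of that handling; "minikube" on PATH and the helper name are assumptions.

package sketch

import (
	"errors"
	"fmt"
	"os/exec"
)

// clusterStatus runs "minikube status" and returns its output plus exit code,
// since a stopped or degraded cluster reports through the code rather than
// through a command failure.
func clusterStatus(profile string) (string, int, error) {
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode(), nil // e.g. 7 when hosts are stopped, as above
	}
	if err != nil {
		return "", 0, fmt.Errorf("running status: %w", err)
	}
	return string(out), 0, nil
}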

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 node add --control-plane --alsologtostderr -v 5: (1m21.715190575s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-904693 status --alsologtostderr -v 5: (1.125675051s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.84s)

                                                
                                    
TestJSONOutput/start/Command (79.4s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-898560 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1018 12:53:42.457623  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-898560 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.392600509s)
--- PASS: TestJSONOutput/start/Command (79.40s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-898560 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-898560 --output=json --user=testUser: (5.833971394s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-936313 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-936313 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.853949ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"56e25b76-6d14-45b1-ae35-f0d4d5d3cc81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-936313] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2123d3de-7272-4be2-9906-25dca0d74949","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"48503a82-78b3-441f-89ea-c8699366b76b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d586c89f-1f8a-4473-85ef-a5cb09438ab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig"}}
	{"specversion":"1.0","id":"ec4b69b8-1f1c-4b43-9b0a-f81c2468dd81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube"}}
	{"specversion":"1.0","id":"e28bdd7e-ff76-4305-96e7-616cac123c9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4008402d-1046-491c-9e6d-22717118c671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"27e77b25-367c-40b1-9906-4223bbb483ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-936313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-936313
--- PASS: TestErrorJSONOutput (0.25s)
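Note: with --output=json, each stdout line above is a CloudEvents-style envelope whose type field distinguishes step, info and error events; the DRV_UNSUPPORTED_OS failure arrives as an io.k8s.sigs.minikube.error event. A short Go sketch that scans such a stream for the first error event; the struct covers only the fields visible above.

package sketch

import (
	"bufio"
	"encoding/json"
	"io"
)

// event models the fields visible in the --output=json stream above:
// one CloudEvents-style JSON object per line, with the payload under "data".
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// firstError returns the data of the first io.k8s.sigs.minikube.error event,
// e.g. its "exitcode", "name" and "message" fields.
func firstError(r io.Reader) (map[string]string, bool, error) {
	sc := bufio.NewScanner(r)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate any non-JSON lines in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			return ev.Data, true, nil
		}
	}
	return nil, false, sc.Err()
}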

                                                
                                    
TestKicCustomNetwork/create_custom_network (37.79s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-761619 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-761619 --network=: (35.50158235s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-761619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-761619
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-761619: (2.261886856s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.79s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.66s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-612372 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-612372 --network=bridge: (31.499823471s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-612372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-612372
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-612372: (2.136100142s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.66s)

                                                
                                    
TestKicExistingNetwork (39.23s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1018 12:55:27.531340  836086 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 12:55:27.545841  836086 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 12:55:27.545931  836086 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 12:55:27.545949  836086 cli_runner.go:164] Run: docker network inspect existing-network
W1018 12:55:27.564215  836086 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 12:55:27.564249  836086 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1018 12:55:27.564265  836086 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1018 12:55:27.564382  836086 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 12:55:27.581630  836086 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee94edf185e5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:58:5f:a6:c3:9f} reservation:<nil>}
I1018 12:55:27.581972  836086 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ed260}
I1018 12:55:27.581996  836086 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1018 12:55:27.582047  836086 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 12:55:27.647467  836086 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-491621 --network=existing-network
E1018 12:55:39.671833  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-491621 --network=existing-network: (36.949966642s)
helpers_test.go:175: Cleaning up "existing-network-491621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-491621
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-491621: (2.131796122s)
I1018 12:56:06.747339  836086 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (39.23s)
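Note: this test pre-creates the network with the same docker network create flags minikube itself logs above (bridge driver, minikube labels, explicit subnet and gateway, MTU 1500) and then starts a profile against it with --network=existing-network. A Go sketch of that setup step; the profile name and the use of a plain "minikube" binary are assumptions.

package sketch

import "os/exec"

// preCreateNetwork repeats the setup step in the trace: create a bridge
// network carrying minikube's labels, then start a profile against it with
// --network. The flags match the docker network create call logged above.
func preCreateNetwork(name, subnet, gateway, profile string) error {
	create := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet, "--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name)
	if err := create.Run(); err != nil {
		return err
	}
	return exec.Command("minikube", "start", "-p", profile, "--network="+name).Run()
}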

                                                
                                    
TestKicCustomSubnet (40.07s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-760284 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-760284 --subnet=192.168.60.0/24: (37.744311997s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-760284 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-760284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-760284
E1018 12:56:45.526293  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-760284: (2.303877797s)
--- PASS: TestKicCustomSubnet (40.07s)

                                                
                                    
TestKicStaticIP (38.03s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-180562 --static-ip=192.168.200.200
E1018 12:57:02.750766  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-180562 --static-ip=192.168.200.200: (35.689443268s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-180562 ip
helpers_test.go:175: Cleaning up "static-ip-180562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-180562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-180562: (2.177997073s)
--- PASS: TestKicStaticIP (38.03s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (75.64s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-625723 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-625723 --driver=docker  --container-runtime=crio: (35.881264462s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-628398 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-628398 --driver=docker  --container-runtime=crio: (33.949626309s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-625723
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-628398
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-628398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-628398
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-628398: (2.159772194s)
helpers_test.go:175: Cleaning up "first-625723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-625723
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-625723: (2.13731881s)
--- PASS: TestMinikubeProfile (75.64s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-173990 --memory=3072 --mount-string /tmp/TestMountStartserial3992166263/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1018 12:58:42.457560  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-173990 --memory=3072 --mount-string /tmp/TestMountStartserial3992166263/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.24310226s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.24s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-173990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.83s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-175968 --memory=3072 --mount-string /tmp/TestMountStartserial3992166263/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-175968 --memory=3072 --mount-string /tmp/TestMountStartserial3992166263/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.833130677s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-175968 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-173990 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-173990 --alsologtostderr -v=5: (1.710019754s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-175968 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-175968
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-175968: (1.291154s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.72s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-175968
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-175968: (7.713674593s)
--- PASS: TestMountStart/serial/RestartStopped (8.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-175968 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (134.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-647331 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 13:00:39.670271  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-647331 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m14.219924587s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.76s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-647331 -- rollout status deployment/busybox: (3.317894818s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-2f8g8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-vr5xc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-2f8g8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-vr5xc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-2f8g8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-vr5xc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.21s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-2f8g8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-2f8g8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-vr5xc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-647331 -- exec busybox-7b57f96db7-vr5xc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
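Note: the host-connectivity check above resolves host.minikube.internal from inside each busybox pod (taking the address from line 5 of nslookup's output) and pings it once. A Go sketch of the same probe, assuming kubectl is pointed at this cluster's context instead of going through "minikube kubectl"; the function name is illustrative.

package sketch

import (
	"os/exec"
	"strings"
)

// hostIPFromPod resolves host.minikube.internal inside the given pod using the
// same pipeline as the test above, then pings the resulting address once.
func hostIPFromPod(pod string) (string, error) {
	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		return "", err
	}
	ip := strings.TrimSpace(string(out))
	if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		return ip, err
	}
	return ip, nil
}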

                                                
                                    
TestMultiNode/serial/AddNode (58.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-647331 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-647331 -v=5 --alsologtostderr: (57.807564393s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.54s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-647331 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp testdata/cp-test.txt multinode-647331:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3593661379/001/cp-test_multinode-647331.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331:/home/docker/cp-test.txt multinode-647331-m02:/home/docker/cp-test_multinode-647331_multinode-647331-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m02 "sudo cat /home/docker/cp-test_multinode-647331_multinode-647331-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331:/home/docker/cp-test.txt multinode-647331-m03:/home/docker/cp-test_multinode-647331_multinode-647331-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m03 "sudo cat /home/docker/cp-test_multinode-647331_multinode-647331-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp testdata/cp-test.txt multinode-647331-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3593661379/001/cp-test_multinode-647331-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331-m02:/home/docker/cp-test.txt multinode-647331:/home/docker/cp-test_multinode-647331-m02_multinode-647331.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331 "sudo cat /home/docker/cp-test_multinode-647331-m02_multinode-647331.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331-m02:/home/docker/cp-test.txt multinode-647331-m03:/home/docker/cp-test_multinode-647331-m02_multinode-647331-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m03 "sudo cat /home/docker/cp-test_multinode-647331-m02_multinode-647331-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp testdata/cp-test.txt multinode-647331-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3593661379/001/cp-test_multinode-647331-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331-m03:/home/docker/cp-test.txt multinode-647331:/home/docker/cp-test_multinode-647331-m03_multinode-647331.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331 "sudo cat /home/docker/cp-test_multinode-647331-m03_multinode-647331.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 cp multinode-647331-m03:/home/docker/cp-test.txt multinode-647331-m02:/home/docker/cp-test_multinode-647331-m03_multinode-647331-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 ssh -n multinode-647331-m02 "sudo cat /home/docker/cp-test_multinode-647331-m03_multinode-647331-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.64s)
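
CopyFile exercises minikube cp in all three directions (host to node, node to host, node to node) and verifies each copy with minikube ssh + sudo cat. A trimmed sketch using the profile from this run; destination paths are illustrative:

    # host -> primary node
    minikube -p multinode-647331 cp testdata/cp-test.txt multinode-647331:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-647331 cp multinode-647331:/home/docker/cp-test.txt /tmp/cp-test_multinode-647331.txt
    # node -> node
    minikube -p multinode-647331 cp multinode-647331:/home/docker/cp-test.txt multinode-647331-m02:/home/docker/cp-test.txt
    # verify the node -> node copy landed
    minikube -p multinode-647331 ssh -n multinode-647331-m02 "sudo cat /home/docker/cp-test.txt"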

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-647331 node stop m03: (1.32875596s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-647331 status: exit status 7 (540.05892ms)

                                                
                                                
-- stdout --
	multinode-647331
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-647331-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-647331-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-647331 status --alsologtostderr: exit status 7 (566.728309ms)

                                                
                                                
-- stdout --
	multinode-647331
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-647331-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-647331-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:02:46.170884  943997 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:02:46.171279  943997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:02:46.171306  943997 out.go:374] Setting ErrFile to fd 2...
	I1018 13:02:46.171325  943997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:02:46.171620  943997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:02:46.171881  943997 out.go:368] Setting JSON to false
	I1018 13:02:46.171941  943997 mustload.go:65] Loading cluster: multinode-647331
	I1018 13:02:46.172357  943997 config.go:182] Loaded profile config "multinode-647331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:02:46.172394  943997 status.go:174] checking status of multinode-647331 ...
	I1018 13:02:46.172957  943997 cli_runner.go:164] Run: docker container inspect multinode-647331 --format={{.State.Status}}
	I1018 13:02:46.173328  943997 notify.go:220] Checking for updates...
	I1018 13:02:46.192934  943997 status.go:371] multinode-647331 host status = "Running" (err=<nil>)
	I1018 13:02:46.192962  943997 host.go:66] Checking if "multinode-647331" exists ...
	I1018 13:02:46.193369  943997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-647331
	I1018 13:02:46.217372  943997 host.go:66] Checking if "multinode-647331" exists ...
	I1018 13:02:46.218832  943997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:02:46.218922  943997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-647331
	I1018 13:02:46.239395  943997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34012 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/multinode-647331/id_rsa Username:docker}
	I1018 13:02:46.349474  943997 ssh_runner.go:195] Run: systemctl --version
	I1018 13:02:46.356565  943997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:02:46.370259  943997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:02:46.435009  943997 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 13:02:46.419032769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:02:46.435599  943997 kubeconfig.go:125] found "multinode-647331" server: "https://192.168.67.2:8443"
	I1018 13:02:46.436497  943997 api_server.go:166] Checking apiserver status ...
	I1018 13:02:46.436571  943997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 13:02:46.449524  943997 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1220/cgroup
	I1018 13:02:46.458558  943997 api_server.go:182] apiserver freezer: "11:freezer:/docker/be5c02c521fbc468d0e3ac258c1931021fed089a249eb4698af47775d1580cfd/crio/crio-59c41da0d13b127504245dd95b39f250f1ac540c84b91139f6dfb616f67feb7d"
	I1018 13:02:46.458628  943997 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/be5c02c521fbc468d0e3ac258c1931021fed089a249eb4698af47775d1580cfd/crio/crio-59c41da0d13b127504245dd95b39f250f1ac540c84b91139f6dfb616f67feb7d/freezer.state
	I1018 13:02:46.467013  943997 api_server.go:204] freezer state: "THAWED"
	I1018 13:02:46.467039  943997 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1018 13:02:46.477537  943997 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1018 13:02:46.477575  943997 status.go:463] multinode-647331 apiserver status = Running (err=<nil>)
	I1018 13:02:46.477596  943997 status.go:176] multinode-647331 status: &{Name:multinode-647331 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 13:02:46.477627  943997 status.go:174] checking status of multinode-647331-m02 ...
	I1018 13:02:46.477930  943997 cli_runner.go:164] Run: docker container inspect multinode-647331-m02 --format={{.State.Status}}
	I1018 13:02:46.496848  943997 status.go:371] multinode-647331-m02 host status = "Running" (err=<nil>)
	I1018 13:02:46.496878  943997 host.go:66] Checking if "multinode-647331-m02" exists ...
	I1018 13:02:46.497220  943997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-647331-m02
	I1018 13:02:46.515155  943997 host.go:66] Checking if "multinode-647331-m02" exists ...
	I1018 13:02:46.515504  943997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 13:02:46.515551  943997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-647331-m02
	I1018 13:02:46.533402  943997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34017 SSHKeyPath:/home/jenkins/minikube-integration/21647-834184/.minikube/machines/multinode-647331-m02/id_rsa Username:docker}
	I1018 13:02:46.633007  943997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 13:02:46.646942  943997 status.go:176] multinode-647331-m02 status: &{Name:multinode-647331-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 13:02:46.646979  943997 status.go:174] checking status of multinode-647331-m03 ...
	I1018 13:02:46.647297  943997 cli_runner.go:164] Run: docker container inspect multinode-647331-m03 --format={{.State.Status}}
	I1018 13:02:46.670770  943997 status.go:371] multinode-647331-m03 host status = "Stopped" (err=<nil>)
	I1018 13:02:46.670800  943997 status.go:384] host is not running, skipping remaining checks
	I1018 13:02:46.670809  943997 status.go:176] multinode-647331-m03 status: &{Name:multinode-647331-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
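
StopNode shows that stopping one worker leaves the rest of the cluster running, but minikube status then reports the stopped node and exits 7, so scripted checks need to tolerate a non-zero exit. Sketch:

    minikube -p multinode-647331 node stop m03
    # exit status 7 once any node is stopped; capture it rather than letting `set -e` abort
    minikube -p multinode-647331 status --alsologtostderr || echo "status exited with $?"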

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-647331 node start m03 -v=5 --alsologtostderr: (7.3888262s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.17s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (74.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-647331
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-647331
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-647331: (25.087112977s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-647331 --wait=true -v=5 --alsologtostderr
E1018 13:03:42.457185  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-647331 --wait=true -v=5 --alsologtostderr: (49.313281133s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-647331
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.52s)
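
RestartKeepsNodes stops the whole profile and restarts it with --wait=true, then checks that the node list still contains all three nodes. Sketch of the manual flow:

    minikube node list -p multinode-647331
    minikube stop -p multinode-647331
    minikube start -p multinode-647331 --wait=true -v=5 --alsologtostderr
    minikube node list -p multinode-647331   # should match the pre-stop list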

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-647331 node delete m03: (4.987932945s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.69s)
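
DeleteNode removes the previously stopped-and-restarted worker and confirms via kubectl that only the remaining nodes are left and report Ready. Sketch:

    minikube -p multinode-647331 node delete m03
    minikube -p multinode-647331 status --alsologtostderr
    kubectl get nodes   # multinode-647331-m03 should no longer be listed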

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-647331 stop: (23.822943977s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-647331 status: exit status 7 (100.008973ms)

                                                
                                                
-- stdout --
	multinode-647331
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-647331-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-647331 status --alsologtostderr: exit status 7 (95.61337ms)

                                                
                                                
-- stdout --
	multinode-647331
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-647331-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:04:39.039069  951766 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:04:39.039203  951766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:04:39.039214  951766 out.go:374] Setting ErrFile to fd 2...
	I1018 13:04:39.039221  951766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:04:39.039463  951766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:04:39.039693  951766 out.go:368] Setting JSON to false
	I1018 13:04:39.039738  951766 mustload.go:65] Loading cluster: multinode-647331
	I1018 13:04:39.039805  951766 notify.go:220] Checking for updates...
	I1018 13:04:39.041005  951766 config.go:182] Loaded profile config "multinode-647331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:04:39.041031  951766 status.go:174] checking status of multinode-647331 ...
	I1018 13:04:39.041733  951766 cli_runner.go:164] Run: docker container inspect multinode-647331 --format={{.State.Status}}
	I1018 13:04:39.058676  951766 status.go:371] multinode-647331 host status = "Stopped" (err=<nil>)
	I1018 13:04:39.058700  951766 status.go:384] host is not running, skipping remaining checks
	I1018 13:04:39.058708  951766 status.go:176] multinode-647331 status: &{Name:multinode-647331 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 13:04:39.058733  951766 status.go:174] checking status of multinode-647331-m02 ...
	I1018 13:04:39.059052  951766 cli_runner.go:164] Run: docker container inspect multinode-647331-m02 --format={{.State.Status}}
	I1018 13:04:39.085574  951766 status.go:371] multinode-647331-m02 host status = "Stopped" (err=<nil>)
	I1018 13:04:39.085600  951766 status.go:384] host is not running, skipping remaining checks
	I1018 13:04:39.085608  951766 status.go:176] multinode-647331-m02 status: &{Name:multinode-647331-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)
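
StopMultiNode stops every node in the profile with a single minikube stop; as with a single stopped node, status afterwards exits 7 with all components reported Stopped. Sketch:

    minikube -p multinode-647331 stop
    minikube -p multinode-647331 status || echo "all nodes stopped (exit $?)"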

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (52.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-647331 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-647331 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.972651731s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-647331 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.67s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (35.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-647331
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-647331-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-647331-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.965675ms)

                                                
                                                
-- stdout --
	* [multinode-647331-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-647331-m02' is duplicated with machine name 'multinode-647331-m02' in profile 'multinode-647331'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-647331-m03 --driver=docker  --container-runtime=crio
E1018 13:05:39.671828  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-647331-m03 --driver=docker  --container-runtime=crio: (32.994657372s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-647331
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-647331: exit status 80 (376.531025ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-647331 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-647331-m03 already exists in multinode-647331-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-647331-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-647331-m03: (2.073793619s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.59s)
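
ValidateNameConflict checks two guard rails: a new profile may not reuse a name that is already a machine name inside another profile (MK_USAGE, exit 14), and node add refuses to create a node whose generated name collides with an existing profile (GUEST_NODE_ADD, exit 80). Sketch of the first case:

    minikube node list -p multinode-647331
    # rejected: "multinode-647331-m02" is already a machine in the multinode-647331 profile
    minikube start -p multinode-647331-m02 --driver=docker --container-runtime=crio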

                                                
                                    
x
+
TestPreload (131.71s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-545316 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-545316 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.185964957s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-545316 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-545316 image pull gcr.io/k8s-minikube/busybox: (2.367765794s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-545316
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-545316: (5.87170504s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-545316 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-545316 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (57.597573076s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-545316 image list
helpers_test.go:175: Cleaning up "test-preload-545316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-545316
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-545316: (2.438985115s)
--- PASS: TestPreload (131.71s)
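
TestPreload starts an older-version cluster with preloads disabled, pulls an extra image, stops the cluster, restarts it with the current binary and default Kubernetes version, and then checks that the manually pulled image is still present. Sketch of the flow, using a plain minikube binary:

    minikube start -p test-preload-545316 --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
    minikube -p test-preload-545316 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-545316
    minikube start -p test-preload-545316 --memory=3072 --driver=docker --container-runtime=crio
    minikube -p test-preload-545316 image list   # the pulled busybox image should still be listed
    minikube delete -p test-preload-545316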

                                                
                                    
x
+
TestScheduledStopUnix (109.15s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-422223 --memory=3072 --driver=docker  --container-runtime=crio
E1018 13:08:42.457855  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-422223 --memory=3072 --driver=docker  --container-runtime=crio: (33.092740861s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-422223 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-422223 -n scheduled-stop-422223
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-422223 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 13:08:56.995761  836086 retry.go:31] will retry after 81.653µs: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:56.996224  836086 retry.go:31] will retry after 201.595µs: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:56.997110  836086 retry.go:31] will retry after 335.116µs: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:56.998517  836086 retry.go:31] will retry after 197.481µs: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:56.998914  836086 retry.go:31] will retry after 705.467µs: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.000017  836086 retry.go:31] will retry after 558.482µs: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.003792  836086 retry.go:31] will retry after 1.099755ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.005037  836086 retry.go:31] will retry after 2.46991ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.008241  836086 retry.go:31] will retry after 1.470976ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.010506  836086 retry.go:31] will retry after 2.538431ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.013749  836086 retry.go:31] will retry after 3.028869ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.016946  836086 retry.go:31] will retry after 5.434656ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.023277  836086 retry.go:31] will retry after 13.548353ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.037506  836086 retry.go:31] will retry after 24.92332ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.062901  836086 retry.go:31] will retry after 30.045531ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.097396  836086 retry.go:31] will retry after 26.784248ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
I1018 13:08:57.124601  836086 retry.go:31] will retry after 55.485489ms: open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/scheduled-stop-422223/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-422223 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-422223 -n scheduled-stop-422223
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-422223
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-422223 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-422223
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-422223: exit status 7 (73.189925ms)

                                                
                                                
-- stdout --
	scheduled-stop-422223
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-422223 -n scheduled-stop-422223
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-422223 -n scheduled-stop-422223: exit status 7 (68.500465ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-422223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-422223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-422223: (4.3672448s)
--- PASS: TestScheduledStopUnix (109.15s)
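
TestScheduledStopUnix drives the --schedule / --cancel-scheduled flags: a 5-minute stop is scheduled and cancelled, then a 15-second stop is scheduled and allowed to fire, after which status reports the host as Stopped (exit 7). Sketch:

    minikube stop -p scheduled-stop-422223 --schedule 5m
    minikube stop -p scheduled-stop-422223 --cancel-scheduled
    minikube stop -p scheduled-stop-422223 --schedule 15s
    # after the schedule fires, the host shows Stopped and status exits 7
    minikube status -p scheduled-stop-422223 --format={{.Host}} || true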

                                                
                                    
x
+
TestInsufficientStorage (13.34s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-798585 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-798585 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.716232047s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b471a923-b029-453d-85b3-49fe824bdf2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-798585] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb7e4373-753a-4206-adf4-80eee7b2e5d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"d7960f10-846e-4601-8b8f-203cd6d92384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6a2dd038-aa26-4452-85ad-00d6494f7987","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig"}}
	{"specversion":"1.0","id":"6771593c-0e66-43e0-be41-2ed3ba75e0f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube"}}
	{"specversion":"1.0","id":"81007232-d022-431d-86d9-235007e9a18d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9dff3839-ef56-4f5e-97c3-7aa152bd3414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a1829763-b2c7-4a38-915a-f3b32deb7e7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"37f02625-05fc-4ec2-9c32-5f91be82f330","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1874f9a0-2027-4d53-ad0c-a0e3bc33c851","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"abdaf260-3374-489b-a2b2-cac4523c193e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7cd0cfb5-e6fa-45c2-b9ff-7da4a6d9f956","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-798585\" primary control-plane node in \"insufficient-storage-798585\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"57a787ef-6e9b-41f6-8ca8-0309f3b263c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e99f8c5e-324e-4c93-83d8-ece9985c1618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4778abb0-10a3-4e1a-bac5-b70559891d9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-798585 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-798585 --output=json --layout=cluster: exit status 7 (310.28291ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-798585","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-798585","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 13:10:23.529179  967968 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-798585" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-798585 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-798585 --output=json --layout=cluster: exit status 7 (308.868973ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-798585","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-798585","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 13:10:23.838502  968033 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-798585" does not appear in /home/jenkins/minikube-integration/21647-834184/kubeconfig
	E1018 13:10:23.848870  968033 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/insufficient-storage-798585/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-798585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-798585
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-798585: (2.001893731s)
--- PASS: TestInsufficientStorage (13.34s)
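
TestInsufficientStorage forces the storage preflight to fail: the MINIKUBE_TEST_STORAGE_CAPACITY=100 / MINIKUBE_TEST_AVAILABLE_STORAGE=19 values visible in the JSON events are test-only knobs (assumed here to be passed as environment variables) that make minikube believe /var is full, so start aborts with RSRC_DOCKER_STORAGE (exit 26) and status --layout=cluster reports StatusCode 507. Sketch:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p insufficient-storage-798585 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    minikube status -p insufficient-storage-798585 --output=json --layout=cluster
    minikube delete -p insufficient-storage-798585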

                                                
                                    
x
+
TestRunningBinaryUpgrade (56.32s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1290459505 start -p running-upgrade-273873 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1018 13:15:39.670717  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1290459505 start -p running-upgrade-273873 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.127735994s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-273873 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-273873 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.217378328s)
helpers_test.go:175: Cleaning up "running-upgrade-273873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-273873
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-273873: (2.17508362s)
--- PASS: TestRunningBinaryUpgrade (56.32s)

                                                
                                    
x
+
TestKubernetesUpgrade (347.62s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.684349779s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-022190
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-022190: (1.336269901s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-022190 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-022190 status --format={{.Host}}: exit status 7 (72.605525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.867580177s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-022190 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (124.924237ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-022190] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-022190
	    minikube start -p kubernetes-upgrade-022190 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0221902 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-022190 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.450802286s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-022190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-022190
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-022190: (2.972026846s)
--- PASS: TestKubernetesUpgrade (347.62s)
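
TestKubernetesUpgrade walks a cluster from v1.28.0 to v1.34.1 across a stop/start, confirms that an in-place downgrade is rejected with K8S_DOWNGRADE_UNSUPPORTED (exit 106), and then restarts at the upgraded version. Sketch:

    minikube start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-022190
    minikube start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
    # refused: downgrading an existing cluster in place is not supported (exit 106)
    minikube start -p kubernetes-upgrade-022190 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio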

                                                
                                    
x
+
TestMissingContainerUpgrade (112.56s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1845290403 start -p missing-upgrade-972770 --memory=3072 --driver=docker  --container-runtime=crio
E1018 13:10:39.670203  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1845290403 start -p missing-upgrade-972770 --memory=3072 --driver=docker  --container-runtime=crio: (1m3.716474526s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-972770
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-972770
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-972770 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-972770 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.913873379s)
helpers_test.go:175: Cleaning up "missing-upgrade-972770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-972770
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-972770: (2.024635834s)
--- PASS: TestMissingContainerUpgrade (112.56s)
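
TestMissingContainerUpgrade creates a profile with an old release binary (the /tmp/minikube-v1.32.0.* download), deletes the node container out from under it with docker stop/rm, and verifies that a start with the current binary recreates the machine. Sketch of the recovery step, using a plain minikube binary for the new-version start:

    docker stop missing-upgrade-972770 && docker rm missing-upgrade-972770
    minikube start -p missing-upgrade-972770 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio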

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-166782 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-166782 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (109.17469ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-166782] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
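
StartNoK8sWithVersion confirms that --no-kubernetes and --kubernetes-version are mutually exclusive (MK_USAGE, exit 14); the suggested remedy when a version is pinned in the global config is to unset it. Sketch:

    # rejected with exit 14
    minikube start -p NoKubernetes-166782 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    # clear a globally configured version before retrying with --no-kubernetes
    minikube config unset kubernetes-version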

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (51.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-166782 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-166782 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (51.292934986s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-166782 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (51.84s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (118.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-166782 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-166782 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m55.988818863s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-166782 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-166782 status -o json: exit status 2 (318.846794ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-166782","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-166782
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-166782: (2.262649814s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (118.57s)
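The exit status 2 from `minikube status -o json` above is the expected result once Kubernetes has been turned off: the host container is still running while Kubelet and APIServer report Stopped. A minimal decoding sketch for that JSON payload (the struct and its use are illustrative; the field names simply mirror the output captured above, not minikube's internal types):

package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Payload copied from the `status -o json` output above.
	raw := `{"Name":"NoKubernetes-166782","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// Host up but Kubernetes components stopped is why `minikube status`
	// exits with code 2 rather than 0 in the run above.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}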

                                                
                                    
TestNoKubernetes/serial/Start (8.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-166782 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-166782 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.82351609s)
--- PASS: TestNoKubernetes/serial/Start (8.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-166782 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-166782 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.989916ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
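The non-zero exit here is the success condition: `systemctl is-active --quiet` fails (surfaced as ssh status 3) when the kubelet unit is not active, which is exactly what a --no-kubernetes cluster should look like. A small sketch of the same probe, assuming a minikube binary on PATH and reusing the profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletInactive reports whether the kubelet unit is not active inside the
// given profile, using the same probe as the test above.
func kubeletInactive(profile string) bool {
	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	// systemctl exits non-zero for an inactive unit, so an error from Run
	// means "kubelet is not running".
	return cmd.Run() != nil
}

func main() {
	fmt.Println("kubelet inactive:", kubeletInactive("NoKubernetes-166782"))
}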

                                                
                                    
TestNoKubernetes/serial/ProfileList (32.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
E1018 13:13:25.528451  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:13:42.457675  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:13:42.753341  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-arm64 profile list: (17.629740761s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (15.039467253s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.67s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-166782
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-166782: (1.297157702s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-166782 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-166782 --driver=docker  --container-runtime=crio: (7.733622764s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-166782 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-166782 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.822596ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (58.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3238465422 start -p stopped-upgrade-311504 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3238465422 start -p stopped-upgrade-311504 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.456466598s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3238465422 -p stopped-upgrade-311504 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3238465422 -p stopped-upgrade-311504 stop: (1.257894959s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-311504 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-311504 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.366429307s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (58.08s)
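The upgrade scenario above boils down to three sequential CLI invocations: create a cluster with a previously released binary, stop it with that same binary, then start the stopped cluster with the binary under test. A rough Go sketch of that flow (the binary paths and profile name are illustrative stand-ins for the harness's temp files, not the test's actual code):

package main

import (
	"log"
	"os/exec"
)

// run executes one CLI invocation and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.32.0"    // previously released binary (assumed path)
	newBin := "out/minikube-linux-arm64" // current build under test

	// 1. Create the cluster with the old release.
	run(oldBin, "start", "-p", "stopped-upgrade", "--memory=3072",
		"--vm-driver=docker", "--container-runtime=crio")
	// 2. Stop it with the same old binary.
	run(oldBin, "-p", "stopped-upgrade", "stop")
	// 3. Restart the stopped cluster with the new binary; this restart is the upgrade being exercised.
	run(newBin, "start", "-p", "stopped-upgrade", "--memory=3072",
		"--driver=docker", "--container-runtime=crio")
}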

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-311504
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-311504: (1.315703577s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

                                                
                                    
TestPause/serial/Start (78.54s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-581407 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-581407 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m18.539901592s)
--- PASS: TestPause/serial/Start (78.54s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (23.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-581407 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-581407 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.800336079s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (23.83s)

                                                
                                    
TestNetworkPlugins/group/false (5.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-633218 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-633218 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (247.136276ms)

                                                
                                                
-- stdout --
	* [false-633218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 13:18:11.287828 1005034 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:18:11.287975 1005034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:18:11.287984 1005034 out.go:374] Setting ErrFile to fd 2...
	I1018 13:18:11.287989 1005034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:18:11.288273 1005034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-834184/.minikube/bin
	I1018 13:18:11.288712 1005034 out.go:368] Setting JSON to false
	I1018 13:18:11.289604 1005034 start.go:131] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18044,"bootTime":1760775448,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1018 13:18:11.289673 1005034 start.go:141] virtualization:  
	I1018 13:18:11.296230 1005034 out.go:179] * [false-633218] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:18:11.299484 1005034 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:18:11.299524 1005034 notify.go:220] Checking for updates...
	I1018 13:18:11.306235 1005034 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:18:11.311820 1005034 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-834184/kubeconfig
	I1018 13:18:11.315113 1005034 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-834184/.minikube
	I1018 13:18:11.318150 1005034 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:18:11.321062 1005034 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:18:11.324577 1005034 config.go:182] Loaded profile config "force-systemd-flag-882807": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 13:18:11.324679 1005034 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:18:11.380226 1005034 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:18:11.380382 1005034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:18:11.441749 1005034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 13:18:11.427782044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:18:11.441854 1005034 docker.go:318] overlay module found
	I1018 13:18:11.445947 1005034 out.go:179] * Using the docker driver based on user configuration
	I1018 13:18:11.448789 1005034 start.go:305] selected driver: docker
	I1018 13:18:11.448806 1005034 start.go:925] validating driver "docker" against <nil>
	I1018 13:18:11.448820 1005034 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:18:11.452529 1005034 out.go:203] 
	W1018 13:18:11.458407 1005034 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 13:18:11.461333 1005034 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-633218 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-633218" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-633218

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-633218"

                                                
                                                
----------------------- debugLogs end: false-633218 [took: 4.602240738s] --------------------------------
helpers_test.go:175: Cleaning up "false-633218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-633218
--- PASS: TestNetworkPlugins/group/false (5.08s)
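Every debug probe above reports a missing profile or context because the start command itself was rejected: with the crio runtime a CNI is required, so `--cni=false` exits with MK_USAGE (status 14) before any cluster or kubeconfig context exists. A short sketch of asserting that rejection, assuming a minikube binary on PATH (binary and profile names are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "false-demo",
		"--cni=false", "--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	// The command should fail fast with a usage error mentioning MK_USAGE.
	if err != nil && strings.Contains(string(out), "MK_USAGE") {
		fmt.Println("rejected as expected: the crio runtime requires CNI")
	}
}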

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1018 13:20:39.670223  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.814100475s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-460322 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1c6bf5f2-e479-4cab-8117-2ce11ae04d08] Pending
helpers_test.go:352: "busybox" [1c6bf5f2-e479-4cab-8117-2ce11ae04d08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1c6bf5f2-e479-4cab-8117-2ce11ae04d08] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00401174s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-460322 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)
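The DeployApp step applies testdata/busybox.yaml, waits up to 8m0s for pods labelled integration-test=busybox in the default namespace to become healthy, then runs `ulimit -n` inside the pod. The harness polls through helpers_test.go; roughly the same wait can be expressed with kubectl directly. A hedged sketch (standard kubectl flags, context name taken from this run; this is not the test's own helper code):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	ctx := "old-k8s-version-460322"

	// Wait for the busybox pod created from testdata/busybox.yaml to become Ready.
	wait := exec.Command("kubectl", "--context", ctx, "wait",
		"--for=condition=Ready", "pod", "-l", "integration-test=busybox",
		"-n", "default", "--timeout=8m0s")
	if out, err := wait.CombinedOutput(); err != nil {
		log.Fatalf("wait failed: %v\n%s", err, out)
	}

	// Same follow-up check as the test: read the open-file limit inside the pod.
	ulimit := exec.Command("kubectl", "--context", ctx, "exec", "busybox",
		"--", "/bin/sh", "-c", "ulimit -n")
	out, err := ulimit.CombinedOutput()
	if err != nil {
		log.Fatalf("exec failed: %v\n%s", err, out)
	}
	fmt.Printf("ulimit -n in pod: %s", out)
}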

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-460322 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-460322 --alsologtostderr -v=3: (12.026279204s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460322 -n old-k8s-version-460322
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460322 -n old-k8s-version-460322: exit status 7 (73.104ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-460322 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (50.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-460322 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.707699526s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460322 -n old-k8s-version-460322
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sxt4n" [ee1a1889-ff95-440a-b07e-321beed40111] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003650222s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-sxt4n" [ee1a1889-ff95-440a-b07e-321beed40111] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003317126s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-460322 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-460322 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (68.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m8.52372694s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-779884 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4fe34383-2a51-4ea1-b880-6976f0c5dfbf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4fe34383-2a51-4ea1-b880-6976f0c5dfbf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003457527s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-779884 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-779884 --alsologtostderr -v=3
E1018 13:23:42.457582  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-779884 --alsologtostderr -v=3: (12.025893788s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-779884 -n no-preload-779884
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-779884 -n no-preload-779884: exit status 7 (78.517601ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-779884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (56.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-779884 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.659843424s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-779884 -n no-preload-779884
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (84.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.807271991s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qspqp" [74c50271-0bfb-4ad7-a703-461627cec95c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003429204s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qspqp" [74c50271-0bfb-4ad7-a703-461627cec95c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00383783s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-779884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-779884 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 13:25:39.670377  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.198132298s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-774829 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [81334b31-a289-4f5b-8a24-8624dec0226c] Pending
E1018 13:25:44.825195  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:44.831516  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:44.842781  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:44.864146  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:44.905517  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:44.986881  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:45.148203  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:45.470358  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [81334b31-a289-4f5b-8a24-8624dec0226c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1018 13:25:46.111798  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:47.393395  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [81334b31-a289-4f5b-8a24-8624dec0226c] Running
E1018 13:25:49.955304  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004363006s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-774829 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.38s)
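
The harness above polls pods labelled integration-test=busybox until they report Running/Ready and then reads the open-file limit inside the container. An equivalent manual check, sketched with plain kubectl (testdata/busybox.yaml itself is not reproduced in this report):

    # Wait for the pod created from testdata/busybox.yaml, then run the same probe.
    kubectl --context embed-certs-774829 wait pod -l integration-test=busybox \
      --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-774829 exec busybox -- /bin/sh -c "ulimit -n"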

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-774829 --alsologtostderr -v=3
E1018 13:26:05.319987  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-774829 --alsologtostderr -v=3: (12.045407201s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-774829 -n embed-certs-774829
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-774829 -n embed-certs-774829: exit status 7 (72.687705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-774829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
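
Exit status 7 from minikube status is what the test treats as "host stopped, otherwise fine", which is why the dashboard addon can still be enabled against the stopped profile. A minimal shell sketch of that logic, using only the command and output shown above:

    # status exits non-zero while the host is stopped, so tolerate the failure
    # and branch on the printed host state instead.
    host_state=$(out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-774829 || true)
    if [ "$host_state" = "Stopped" ]; then
      out/minikube-linux-arm64 addons enable dashboard -p embed-certs-774829 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi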

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (52.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 13:26:25.801344  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-774829 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.404247438s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-774829 -n embed-certs-774829
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.78s)
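
--embed-certs makes minikube inline the client certificate and key into kubeconfig instead of pointing at files under the profile directory. A hedged way to confirm that after the restart, assuming the kubeconfig user entry is named after the profile as usual:

    # With --embed-certs the user entry carries base64 client-certificate-data rather
    # than a client-certificate file path; non-empty output confirms the embedding.
    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-774829")].user.client-certificate-data}' \
      | head -c 40 && echo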

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-208258 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [efa1431b-7aa6-4fac-8f3a-3ef14ac8ad40] Pending
helpers_test.go:352: "busybox" [efa1431b-7aa6-4fac-8f3a-3ef14ac8ad40] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [efa1431b-7aa6-4fac-8f3a-3ef14ac8ad40] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.006924892s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-208258 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-208258 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-208258 --alsologtostderr -v=3: (12.153493609s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258: exit status 7 (241.877232ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-208258 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-208258 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.217190843s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-208258 -n default-k8s-diff-port-208258
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.70s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vk5gp" [03a1777f-b7cc-407d-9621-3fa0e485871b] Running
E1018 13:27:06.763286  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004378358s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vk5gp" [03a1777f-b7cc-407d-9621-3fa0e485871b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004251749s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-774829 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-774829 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (40.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.209210174s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.21s)
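
The flags worth noting here are --network-plugin=cni and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16. A sketch for checking that kubeadm actually picked up the custom CIDR, assuming the standard kubeadm-config ConfigMap layout:

    # Node pod CIDRs should be carved out of 10.42.0.0/16 rather than the default range.
    kubectl --context newest-cni-977407 get nodes -o jsonpath='{.items[*].spec.podCIDR}'
    # The cluster-wide value kubeadm recorded at init time.
    kubectl --context newest-cni-977407 -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet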

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5t7tq" [cd845222-6a66-4024-b059-0be5c4fed286] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009314304s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5t7tq" [cd845222-6a66-4024-b059-0be5c4fed286] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00433527s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-208258 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-977407 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-977407 --alsologtostderr -v=3: (1.374690981s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-208258 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977407 -n newest-cni-977407
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977407 -n newest-cni-977407: exit status 7 (121.280813ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-977407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (21.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-977407 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (20.508847681s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977407 -n newest-cni-977407
E1018 13:28:30.368355  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (92.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1018 13:28:28.684600  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:29.723297  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:29.729565  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:29.740823  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:29.762139  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:29.803473  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:29.884809  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:30.046234  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m32.957035084s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-977407 image list --format=json
E1018 13:28:31.009996  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (87.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1018 13:28:42.457484  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:28:50.216606  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:10.698279  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:29:51.659879  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m27.617755713s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-633218 "pgrep -a kubelet"
I1018 13:29:54.091953  836086 config.go:182] Loaded profile config "auto-633218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
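
The KubeletFlags checks only capture the kubelet command line via pgrep -a. To see which runtime endpoint the kubelet was pointed at for this cri-o run, a sketch (on newer Kubernetes versions the endpoint may live in the kubelet config file rather than on the command line, hence the tolerant grep):

    # One flag per line, then pick out the runtime endpoint; with --container-runtime=crio
    # this is typically unix:///var/run/crio/crio.sock.
    out/minikube-linux-arm64 ssh -p auto-633218 "pgrep -a kubelet" \
      | tr ' ' '\n' | grep -- '--container-runtime-endpoint' || true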

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-633218 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9kxm2" [7a372280-9ef5-4baa-b50d-a434c25c3812] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9kxm2" [7a372280-9ef5-4baa-b50d-a434c25c3812] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004308189s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-633218 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)
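
This check resolves the short name kubernetes.default from inside the netcat pod, exercising both cluster DNS and the pod's resolv.conf search path. If it ever fails, the usual first step is to confirm CoreDNS itself is healthy; a sketch (CoreDNS pods carry the conventional k8s-app=kube-dns label):

    # The in-pod lookup used by the test, plus the fully-qualified form for comparison.
    kubectl --context auto-633218 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-633218 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local
    # CoreDNS health on the control-plane side.
    kubectl --context auto-633218 -n kube-system get pods -l k8s-app=kube-dns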

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
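
Localhost and HairPin probe two different paths: Localhost connects to port 8080 on the pod's own loopback, while HairPin makes the pod dial its own Service name ("netcat"), i.e. out through the service VIP and back to itself, which only works when the CNI/kube-proxy combination handles hairpin traffic. A sketch that makes the intermediate service address visible, assuming the deployment is exposed by a Service named netcat as the nc target above implies:

    # Show the ClusterIP the hairpin path traverses, then repeat the probe.
    kubectl --context auto-633218 get svc netcat -o jsonpath='{.spec.clusterIP}' && echo
    kubectl --context auto-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"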

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ll2fc" [701b685c-c6ee-4976-86c8-d877c13234ad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.010980481s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-633218 "pgrep -a kubelet"
I1018 13:30:15.227840  836086 config.go:182] Loaded profile config "kindnet-633218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-633218 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fwvmh" [8c4a5a5c-879f-42eb-85cf-86fbe53de348] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fwvmh" [8c4a5a5c-879f-42eb-85cf-86fbe53de348] Running
E1018 13:30:22.755181  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/functional-767781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004811285s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (71.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.153336142s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-633218 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (64.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1018 13:31:12.526570  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/old-k8s-version-460322/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:13.581237  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:35.298194  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:35.304949  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:35.316675  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:35.338087  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:35.379830  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:35.461507  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:35.623116  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:35.945137  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:36.586934  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.538370331s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.54s)
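
Unlike the built-in names (kindnet, calico, flannel, bridge), --cni here points at a manifest file, so minikube applies testdata/kube-flannel.yaml as the CNI. A hedged check that the pods from the custom manifest came up, assuming it is the upstream Flannel manifest and therefore creates the kube-flannel namespace and DaemonSet (the file itself is not reproduced in this report):

    # Confirm the DaemonSet applied from the custom manifest is running on the node.
    kubectl --context custom-flannel-633218 -n kube-flannel get daemonset,pods -o wide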

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-6dtn7" [2dfacd7d-2cc4-434b-959c-6f2b6320572d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1018 13:31:37.868530  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:31:40.430511  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-6dtn7" [2dfacd7d-2cc4-434b-959c-6f2b6320572d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005159204s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-633218 "pgrep -a kubelet"
I1018 13:31:43.753250  836086 config.go:182] Loaded profile config "calico-633218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-633218 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z8jnn" [ccdfc744-1533-433c-9cd5-baaf95d6c9e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 13:31:45.553275  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-z8jnn" [ccdfc744-1533-433c-9cd5-baaf95d6c9e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005577632s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-633218 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-633218 "pgrep -a kubelet"
I1018 13:31:56.357685  836086 config.go:182] Loaded profile config "custom-flannel-633218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-633218 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hp46g" [803f00a1-abfe-4ffd-8542-13a4861c4be2] Pending
helpers_test.go:352: "netcat-cd4db9dbf-hp46g" [803f00a1-abfe-4ffd-8542-13a4861c4be2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hp46g" [803f00a1-abfe-4ffd-8542-13a4861c4be2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004805283s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-633218 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (86.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m26.652262697s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (64.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1018 13:32:57.238675  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/default-k8s-diff-port-208258/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:33:29.723728  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m4.740939495s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rmz7t" [9a4f51d4-f606-474c-9a59-d85427b42bc1] Running
E1018 13:33:42.457533  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/addons-206214/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004595892s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-633218 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-633218 "pgrep -a kubelet"
I1018 13:33:45.807610  836086 config.go:182] Loaded profile config "flannel-633218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-633218 replace --force -f testdata/netcat-deployment.yaml
I1018 13:33:45.821306  836086 config.go:182] Loaded profile config "enable-default-cni-633218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gmfhh" [f56020ac-bc5a-4122-a669-76b919df99ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gmfhh" [f56020ac-bc5a-4122-a669-76b919df99ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004010319s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-633218 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vgl7g" [b139c8b0-7faa-48fe-a66d-0aac012c1377] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vgl7g" [b139c8b0-7faa-48fe-a66d-0aac012c1377] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.009857962s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-633218 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-633218 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1018 13:33:57.422590  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/no-preload-779884/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (43.61s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-633218 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (43.606972319s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.61s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-633218 "pgrep -a kubelet"
I1018 13:35:06.590256  836086 config.go:182] Loaded profile config "bridge-633218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-633218 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bckkr" [03eb20d4-0aca-45f5-a0d2-483d096d33f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 13:35:08.781914  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:08.788285  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:08.799840  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:08.821510  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:08.863114  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:08.944654  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:09.106209  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:09.428202  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:10.070398  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bckkr" [03eb20d4-0aca-45f5-a0d2-483d096d33f9] Running
E1018 13:35:11.351876  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:13.913639  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/kindnet-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:35:14.843811  836086 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-834184/.minikube/profiles/auto-633218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003474779s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-633218 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-633218 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.68s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-581361 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-581361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-581361
--- SKIP: TestDownloadOnlyKic (0.68s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-157679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-157679
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (5.18s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-633218 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-633218" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-633218

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-633218"

                                                
                                                
----------------------- debugLogs end: kubenet-633218 [took: 4.982199027s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-633218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-633218
--- SKIP: TestNetworkPlugins/group/kubenet (5.18s)

TestNetworkPlugins/group/cilium (5.11s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-633218 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-633218" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-633218

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-633218" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-633218"

                                                
                                                
----------------------- debugLogs end: cilium-633218 [took: 4.914400268s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-633218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-633218
--- SKIP: TestNetworkPlugins/group/cilium (5.11s)